The RAM-apocalypse continues, of course, with hints of it hitting general manufacturers and delaying gaming systems and even spinning hard drives. At least Micron is making some PCIe 6 drives you cannot have. Also, since we are sometimes audio geeks as well as PC geeks, we talk about some bananas. Seriously. You'll just have to listen to get the scoop on bad Chrome extensions, bad Copilot, and bad password managers. Until then, enjoy Unread Tournament 2004!

Timestamps:
0:00 Intro
01:15 Patreon
03:22 Food with Josh
05:20 Acer and ASUS caught up in HEVC patent dispute
07:15 Intel's new annual GPU cadence
09:00 Micron is making PCIe 6.0 SSDs that you can't have
11:10 WD CEO says storage is already sold out for 2026
14:07 Warning - many consumer electronics companies will fail this year
22:25 Sony may push PS6 launch as far as 2029
22:55 US reportedly removes two Chinese memory companies from banned list
25:54 RTX 5090 LIGHTNING is 5090 USD (list price, anyhow)
29:35 Audio dragged through the mud - and a banana
34:34 (In)Security Corner
45:07 Gaming Quick Hits
50:35 Jeremy reviews 25 USD speakers from Cyber Acoustics
56:53 Picks of the Week
1:08:51 Outro

★ Support this podcast on Patreon ★
This Week In Startups is made possible by:
Gusto - Try Gusto today and get 3 months free at http://uber.com/ai-solutions
Crusoe Cloud - Reserve your capacity for the latest GPUs at http://uber.com/ai-solutions
Uber AI Solutions - Book a demo today at http://uber.com/ai-solutions

Today's show: It's a packed show! We've got YouTuber and Openclaw enthusiast Matthew Berman, Ryan Yaneli, founder of Nextvisit, and Jason Grad, founder of Massive! We're all in on Openclaw, but we have no doubt there's still room in the market for a GIANT Openclaw consumer app to shift the paradigm. What will that look like? Will it be an app? Will it be baked into the iPhone? Let's explore!

Timestamps:
00:00 Intro
02:04 Why Matthew thinks Openclaw is not ready yet to be brought to the consumer
04:45 Jason doesn't want hundreds of different apps and thousands of tabs
05:45 Why Ryan sees Openclaw giving consumers access to opportunities they couldn't have gotten to otherwise
07:02 Only 10% of people are technical enough to install Openclaw
08:16 Would Openclaw be better off as an app?
08:27 *Gusto*. Check out the online payroll and benefits experts with software built specifically for small business and startups. Try Gusto today and get three months FREE at [Uber.com/twist](http://uber.com/ai-solutions)
00:10:52 The killer use case that could bring Openclaw to the consumer
00:12:13 Why Meta acquired Manus
00:15:13 How Ryan uses Openclaw in his personal life
00:18:44 *Crusoe Cloud*: Crusoe is the AI factory company. Reliable infrastructure and expert support. Visit crusoe.ai/savings to reserve your capacity for the latest GPUs today.
00:23:24 What Jason's "Clawpod" does
00:24:38 Jason demos his Openclaw workflow
00:28:23 *Uber AI Solutions* - Your trusted partner to get AI to work in the real world. Book a demo with them TODAY at http://uber.com/ai-solutions
00:30:04 How Matt used Openclaw to figure out he's been having stomach issues
00:32:27 What will be the ultimate UX for AI?
00:38:53 Anthropic has patched the ability to use Openclaw through its pro plan!
00:42:20 Matt and Jason hope for a multi-model future, but we haven't made progress!
00:52:21 Jason has skepticism about the Openclaw foundation
00:52:59 Ryan predicts a new Openclaw fork coming from the shadows!
00:54:21 Peter Steinberger is going to OpenAI, NOT to work with Openclaw… Will he "orphan" Openclaw?
00:58:19 Does Raspberry AI stand a chance against Apple?

Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com/
Check out the TWIST500: https://www.twist500.com
Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp

Follow Lon:
X: https://x.com/lons

Follow Alex:
X: https://x.com/alex
LinkedIn: https://www.linkedin.com/in/alexwilhelm

Follow Jason:
X: https://twitter.com/Jason
LinkedIn: https://www.linkedin.com/in/jasoncalacanis

Thank you to our partners:
*Gusto*. Check out the online payroll and benefits experts with software built specifically for small business and startups. Try Gusto today and get three months FREE at [Uber.com/twist](http://uber.com/ai-solutions)
*Crusoe Cloud*: Crusoe is the AI factory company. Reliable infrastructure and expert support. Visit crusoe.ai/savings to reserve your capacity for the latest GPUs today.
*Uber AI Solutions* - Your trusted partner to get AI to work in the real world. Book a demo with them TODAY at [Uber.com/twist](http://uber.com/ai-solutions)

Check out all our partner offers: https://partners.launch.co/
Check out Jason's suite of newsletters: https://substack.com/@calacanis

Follow TWiST:
Twitter: https://twitter.com/TWiStartups
YouTube: https://www.youtube.com/thisweekin
Instagram: https://www.instagram.com/thisweekinstartups
TikTok: https://www.tiktok.com/@thisweekinstartups
Substack: https://twistartups.substack.com
"While the world talks about ChatGPT, we uncover a hardware revolution from Belgrade that makes AI possible at all, and 20 times faster than anything you've seen." In the second episode of the Pojačalo special series, in collaboration with NextSilicon, Ivan talks with Marko Skakun, AI Team Lead at their Belgrade office, about the revolution in artificial intelligence and the hardware that drives it. Marko gives a detailed historical overview of the evolution of computing power: from generic CPUs, through specialized GPUs, to ultra-efficient ASIC chips. The conversation also traces the development of AI itself, from early neural networks and computer vision to the Transformer architecture and the scaling-laws phenomenon that enabled massive language models like ChatGPT, fundamentally changing the demands we place on hardware. In the second part, the focus shifts to NextSilicon's unique approach to these challenges. Marko explains in detail the innovative dataflow architecture, which differs fundamentally from traditional designs and lets the hardware be flexible, adaptive, and more energy-efficient. Special emphasis is placed on the Belgrade office, which is not just a support site but a key development center where teams work on the most advanced aspects of the technology, from chip design to AI compilers. Through Marko's personal story, we learn why working on such cutting-edge projects in Serbia has become not only possible but also highly attractive to top experts from around the world.
Support us on BuyMeACoffee: https://bit.ly/3uSBmoa
Read the transcript of this episode: https://bit.ly/4kGroRD
Visit our website and sign up for our mailing list: http://bit.ly/2LUKSBG
Subscribe to our YouTube channel: http://bit.ly/2Rgnu7o
Follow Pojačalo on social media:
FB: https://www.facebook.com/PojacaloRS/
IG: https://www.instagram.com/pojacalo.rs/
X: https://x.com/PojacaloRS
LN: https://www.linkedin.com/company/pojacalo
TikTok: https://www.tiktok.com/@pojacalo.rs
Host Jim Love returns after the holidays. Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless, and cellular, in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

The episode covers ByteDance's Seedance 2.0 AI video generator, which is producing highly realistic, film-quality scenes and prompting alarm in Hollywood, including comments from screenwriter Rhett Reese and renewed concerns about likeness rights and AI use in entertainment; ByteDance says it is strengthening safeguards to prevent unauthorized use of intellectual property and likenesses. The show reports that Peter Steinberger, creator of the open-source agent tool OpenClaw, is joining OpenAI and the project is becoming part of a foundation for future agent-based AI, while also highlighting OpenClaw's widely discussed security weaknesses and the implications for OpenAI and competitor Anthropic. Western Digital is reported to be sold out of certain hard drive models as AI-related demand absorbs supply, following earlier GPU and memory price pressures. Finally, Ring's Super Bowl ad about finding a lost dog drew criticism for promoting neighborhood camera networks that resemble mass surveillance.
AI agents are already capable of completing several hours of work autonomously, and their progress is accelerating. For Aymeric Roucher, the real issue is no longer AGI but the concrete rise of an "ultra-intelligence" that is redefining economic and geopolitical balances.

Interview: Aymeric Roucher, machine learning engineer and author

Punchlines:
AGI is a vague and not very useful concept.
The real-world impact of AI is the true metric.
The autonomy horizon doubles every seven months.
Computing power decides the winners.
The water is rising for every profession.

Why do you consider the concept of AGI outdated?
The concept of AGI rests on the idea of an AI at least as capable as a human in every domain. But that definition is vague: which human are we talking about? In which domains? And how do we measure it? What interests me more is real impact. An AI can be worse than us at some unimportant things and still transform the world if it can carry out tasks with very high added value. It is to move beyond this binary view that I chose the term "ultra-intelligence" in my book Ultra-intelligence – Jusqu'où iront les IA ? (Odile Jacob).

You propose the "autonomy horizon" as a new metric. Why is it more relevant?
Rather than debating an abstract intelligence, I look at how long an AI can work alone on a task with a satisfactory success rate. In 2010, AI did auto-completion in a fraction of a second. With the first large conversational models, it handled tasks lasting a few seconds. Today, some agents reach several hours of autonomy, with roughly a 50% success rate on tasks that would take a human up to seven hours. According to published measurements, this autonomy horizon doubles about every seven months. If this trend continues, we will soon be talking about tasks equivalent to a week of work.

Is computing power really the decisive factor in this race?
There are what we call scaling laws: multiplying the compute used in training by ten predictably unlocks a higher level of intelligence. That is why the big labs raise colossal sums and build enormous compute centers. Financial power, combined with engineering talent, becomes decisive. Researchers themselves often choose their lab based on how many GPUs they will have access to. As long as Europe has only a marginal fraction of the world's computing power, it will remain structurally behind.

Should we fear an AI that rebels, or rather the humans who control it?
The alignment problem is real: we must make sure AI does what we expect of it. Experiments have shown that unexpected behaviors can emerge. But I think the most credible risk is not an AI that decides on its own to seize power. The danger comes rather from human actors equipped with very powerful AI. A machine does not need to "rebel" for the consequences to be major.

Are we heading toward a massive shock to employment?
As intelligence progresses, it crosses thresholds that make more professions automatable. Written translation was automated quickly. Software development is following. In many professions there will still be an expert capable of supervising and optimizing the AI's work. But that expert will be able to do the work of dozens of others. The water is rising everywhere. This will not be a marginal adjustment but a profound transformation.

-----------
♥️ Support: https://mondenumerique.info/don
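The extrapolation in the interview can be made concrete with a few lines of arithmetic. As a purely illustrative sketch (the seven-hour horizon and seven-month doubling period come from the interview; the 40-hour "work week" target is an assumption added here):

```java
// Rough projection of the "autonomy horizon" doubling trend:
// horizon H(t) = H0 * 2^(t / doublingPeriod).
public class AutonomyHorizon {
    public static void main(String[] args) {
        double startHours = 7.0;     // today's horizon per the interview (~7-hour tasks)
        double targetHours = 40.0;   // assumed: one full work week
        double doublingMonths = 7.0; // doubling period per the interview
        // Solve startHours * 2^(m / doublingMonths) = targetHours for m.
        double months = doublingMonths * (Math.log(targetHours / startHours) / Math.log(2));
        System.out.printf("~%.1f months to a one-week horizon%n", months); // ~17.6 months
    }
}
```

Under these assumptions, a one-week autonomy horizon is under two years away, which is what the "soon" in the interview amounts to.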
The "cult of personality" around Stalin had to end. So said his successor Khrushchev in 1956. The reckoning with Stalinism sparked hope across the entire Eastern Bloc. But it remained unfinished, to this day.
Brains are mushy, so this time straight in medias res! It's leak season for the next PlayStation: 7-8 Zen 6c CPU cores, 2 low-power cores for the operating system, a GPU with 52 RDNA 5 CUs (or a hybrid architecture), and 30 GB of GDDR7 memory. After the recording there were also hints that Sony may push the PS6 to 2028 or 2029, which should surprise no one. We were surprised by the news that Toyota Connected North America is developing its own open-source game engine. Primarily for in-vehicle use, but also "console grade," i.e. suitable for console and PC games. They looked at the existing engines and then said: OK, then we'll just build our own! In the weekly segment "what are they doing with their AI stuff this time," Microsoft wants to become less dependent on OpenAI and develop its own models for different use cases. And finally a bit of gossip, drama from the Linux gaming scene: at the center, the Open Gaming Collective and the crypto/AI bros of Playtron, with venture capital in the background. Enjoy episode 295!

Speakers: Meep, Michael Kister, Mohammed Ali Dad
Audio production: Michael Kister
Video production: Mohammed Ali Dad, Michael Kister
Cover image: Mohammed Ali Dad
Image sources: Sony PlayStation / self-made
Recording date: 13.02.2026

Visit us
on Discord https://discord.gg/SneNarVCBM
on Bluesky https://bsky.app/profile/technikquatsch.de
on YouTube https://www.youtube.com/@technikquatsch https://www.youtube.com/@technikquatschgaming
on TikTok https://www.tiktok.com/@technikquatsch
on Instagram https://www.instagram.com/technikquatsch
on Twitch https://www.twitch.tv/technikquatsch
RSS feed https://technikquatsch.de/feed/podcast/
Spotify https://open.spotify.com/show/62ZVb7ZvmdtXqqNmnZLF5u
Apple Podcasts https://podcasts.apple.com/de/podcast/technikquatsch/id1510030975
Deezer https://www.deezer.com/de/show/1162032

00:00:00 Welcome to Technikquatsch episode 295! Carnival, caffeine, and shakes.
00:06:03 Toyota Connected North America develops a "console-grade" open-source game engine. https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
00:19:35 Lost-and-found auctions https://www.sonderauktionen.net/
00:22:21 Mike is back; his PC suddenly crashed, and we suspect Microsoft or a driver.
00:26:30 Malicious code executable via Markdown in Microsoft Notepad. https://www.golem.de/news/dank-microsofts-feature-wahn-sogar-notepad-bekommt-jetzt-schadcode-luecken-2602-205315.html
00:30:38 Meep's iPhone is annoying her again. A rant about naming, or something like that.
00:40:27 Leaks and rumors about the PlayStation 6: Zen 6c, 52 CUs "RDNA 5", 30 GB GDDR7 https://videocardz.com/newz/playstation-handheld-reportedly-with-24gb-memory-ps6-console-with-30gb https://www.xboxdynasty.de/news/xbox-next/microsoft-plant-mit-der-gen-10-xbox-die-radikalste-konsolengeneration-seiner-geschichte/
00:56:06 The PlayStation State of Play was quite entertaining; a God of War trilogy remake was teased. https://www.youtube.com/watch?v=P0e5YEXtZu0
01:07:21 Recording, redundancy, and PC problems
01:08:27 Note: Superb Owl halftime show by Bad Bunny https://www.youtube.com/watch?v=G6FuWd4wNd8
01:10:49 Microsoft wants to rely less on OpenAI and more on its own models. https://www.windowscentral.com/artificial-intelligence/microsoft-confirms-plan-to-ditch-openai-as-the-chatgpt-firm-continues-to-beg-big-tech-for-cash
01:16:12 The Open Gaming Collective and the drama around Playtron https://opengamingcollective.org/ https://www.gamingonlinux.com/2026/01/open-gaming-collective-ogc-formed-to-push-linux-gaming-even-further/ https://www.gamingonlinux.com/2026/01/cachyos-founder-explains-why-they-didn-t-join-the-new-open-gaming-collective-ogc/
01:24:46 CachyOS, PikaOS on Mike's recording PC, and the R36S handheld https://cachyos.org/ https://wiki.pika-os.com/en/home
01:35:32 We're approaching episode 300: send us your ideas and suggestions!
Episode 230

In this episode of The Observers Notebook, host Tim Robertson talks with Daniel Parrott, the creator of Tycho Tracker, a powerful software tool that brings professional-level asteroid detection and measurement to amateur astronomers around the world. Daniel shares the story behind Tycho's development, explains how synthetic tracking helps detect faint moving objects, and offers practical tips for observers submitting data to the Minor Planet Center. They discuss GPU (graphics processing unit) acceleration, photometric tools, common beginner mistakes, and how community science is reshaping asteroid discovery. Tycho now also supports lightcurve generation, rotation period analysis, and variable-star photometry, expanding its utility far beyond asteroid hunting. Whether you're a seasoned asteroid hunter or just curious about how software can turn your backyard telescope into a discovery machine, this episode is packed with insights and inspiration.

You can contact Daniel at: tychotracker@gmail.com
Tycho Tracker home page: https://www.tycho-tracker.com/
For more information you can visit the ALPO web site at: www.alpo-astronomy.org/
You can also support this podcast at Patreon: https://www.patreon.com/ObserversNotebook
Listen to the podcast on Soundcloud: https://soundcloud.com/observersnotebook
Subscribe on our YouTube Channel: https://www.youtube.com/c/AssociationofLunarandPlanetaryObservers
Subscribe on iTunes: https://itunes.apple.com/us/podcast/observers-notebook-the-alpo-podcast/id1199301885?mt=2

I want to thank the producers of this podcast, Steve Siedentop and Michael Moyer, for their generous support of the Observers Notebook.

Our Patreons:
Jerry White
Jason Inman
Bob Lunsford
Steve Seidentop
Stephen Bennett
Michael Moyer
Shawn Dilles
Damian Allis
Carl Hergenrother
Michael McShan
Michael Blake
Nick Evetts
Stan Sienkiewicz
John Rogers
Jim McCarthy
Stanley McMahan
Every six months, we get a new version of Java. Java 26 is just around the corner and will be released soon. But most companies stick to LTS (Long-Term Support) versions, which are maintained and receive security updates for many more years. Versions 8, 11, 17, 21, and 25 are the LTS releases. Hopefully, most of your systems are already on recent versions and you are not stuck on 8 or earlier. As a reminder, 8 was released in 2014, so much has changed since then. If you are hesitant about moving from 21 to 25, or even from an earlier version to the latest LTS, this podcast is for you! Together with Jakob Jenkov, we discuss the most important changes, and this episode includes a few quotes from interviews recorded at conferences last year.

Guests
Jakob Jenkov https://www.linkedin.com/in/jakob-jenkov-4a3a8/
Jonathan Vila https://www.linkedin.com/in/jonathanvila/
Ryan Svihla https://www.linkedin.com/in/ryan-svihla-096752182/
Mary Grygleski https://www.linkedin.com/in/mary-grygleski/
Anton Arhipov https://www.linkedin.com/in/antonarhipov/
Ronald Dehuysser https://www.linkedin.com/in/ronalddehuysser/
Jonathan Ellis https://www.linkedin.com/in/jbellis/

Content
00:00 Introduction of topic and guest
Tutorials by Jakob
Podcast #89: Quarkus and Agentic Commerce
03:30 Bugfixes and performance improvements "under the hood"
Quote: Jonathan Vila
08:00 Java as a scripting language
Quote: Ryan Svihla
Compact Source Files and Instance Main Methods
Launch Multi-File Source-Code Programs
https://www.jbang.dev/
Quote: Mary Grygleski
15:03 GC improvements
Generational Shenandoah
Trash Talk - Exploring the JVM memory management by Gerrit Grunwald
What Should I Know About Garbage Collection as a Java Developer?
19:44 Project Loom: Virtual Threads and Structured Concurrency
Quote: Anton Arhipov
29:44 How Java evolves
6-month release cycle
How incubator and preview features are used to get feedback from the community
Long-Term Support and Short-Term Support versions
Foojay Podcast #28: Java 21 Has Arrived!
Foojay Podcast #45: Welcome to Java 22
Foojay Podcast #57: Welcome to OpenJDK (Java) 23
Foojay Podcast #68: Welcome to OpenJDK (Java) 24
Foojay Podcast #78: Welcome to OpenJDK 25!
32:15 Project Leyden: Ahead-of-time features
Ahead-of-Time Command-Line Ergonomics
Ahead-of-Time Method Profiling
Ahead-of-Time Class Loading & Linking
39:15 Project Babylon
Java on CPU, GPU, FPGA?
This is already possible with TornadoVM
Foojay Podcast #82: OpenJDK Projects (Leyden, Babylon, Panama) and TornadoVM
43:25 Class-File API
Quote: Ronald Dehuysser
JavaFX In Action #22 with Matt Coley, diving into byte code and JARs with Recaf and JavaFX libraries
49:20 Foreign Function and Memory API
The FFM API: How OpenJDK Changed the Game for Native Interactions (And Made Pi4J Better!)
jChampions Conference talk "Foreign Function & Memory (FFM) API on Raspberry Pi"
54:26 Vector API
Quote: Jonathan Ellis + Ryan Svihla
59:59 Removal of String Templates
01:00:26 Taking a look into the JVM of the future
01:03:08 Conclusion
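As a taste of the Project Loom material discussed in the episode: virtual threads (final since Java 21) make it cheap to give every task its own thread. A minimal sketch, assuming JDK 21 or newer (the class name and the trivial task are illustrative, not from the episode):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class VirtualThreadsDemo {
    public static void main(String[] args) throws Exception {
        // One virtual thread per submitted task; cheap enough to create thousands.
        // ExecutorService is AutoCloseable, so try-with-resources waits for all
        // submitted tasks to finish before closing.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<Integer>> results = new ArrayList<>();
            for (int i = 0; i < 1_000; i++) {
                final int n = i;
                // A trivial computation standing in for blocking I/O work.
                results.add(executor.submit(() -> n * 2));
            }
            long sum = 0;
            for (Future<Integer> f : results) {
                sum += f.get();
            }
            System.out.println(sum); // 0 + 2 + ... + 1998 = 999000
        }
    }
}
```

With platform threads, a thousand blocking tasks would tie up a thousand OS threads; with virtual threads, the JVM multiplexes them over a small carrier pool, which is the main upgrade argument the episode makes for 21+.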
AI is a tool. Powerful. Fast. Scalable. And completely indifferent to right and wrong. In this episode of the DevSecOps Podcast, we dive into the real dangers of artificial intelligence, beyond the hype and beyond irrational fear. We talk about models that learn human biases, industrial-scale automation of disinformation, generation of vulnerable code with absurd confidence, and the false sense of security when "the AI reviewed it." AI is not ethical. It is not moral. It is not conscious. It is statistics with a GPU. We also discuss the practical impact on software development and application security: devs using copilots without validating the output, teams trusting generated answers as if they were revealed truth, attacks amplified by generative models, turbocharged social engineering, and ever more convincing deepfakes. AI amplifies the best and the worst of us. In the end, the question is not whether AI is dangerous. Every powerful technology is. The question is: are we using it with critical thinking or with intellectual laziness? Because when the machine errs, it errs at scale. And when a human delegates thinking, he outsources responsibility. And responsibility, my friend, cannot be deployed automatically.

Become a supporter of this podcast: https://www.spreaker.com/podcast/devsecops-podcast--4179006/support.
Sponsors: Nova8, Snyk, Conviso, Gold Security, Digitalwolk, and PurpleBird Security.
Antimicrobial resistance is a systems failure, not just a prescribing problem. Olivier Niyonshima, Founder of ZeroX Intelligence and Medikal Africa, explains how NVIDIA-backed edge AI infrastructure is transforming healthcare across Africa.

Olivier Niyonshima ( https://olivierniyonshima.com/ ) is the Founder and CEO of ZeroX Intelligence ( https://zeroxintelligence.com/ ), a Rwandan-Canadian AI company focused on building practical, production-grade technology for healthcare, agriculture, and education across Africa. He leads Medikal Africa ( https://www.medikalafrica.com/ ), an AI-powered clinical intelligence platform designed to combat antimicrobial resistance and strengthen digital health systems in real-world African healthcare environments. Rather than treating AMR as an awareness problem, Olivier approaches it as a systems failure, one driven by delayed diagnostics, fragmented data, and weak decision support, and is building infrastructure to fix that.

Olivier's work has been recognized globally. Medikal Africa is an NVIDIA Inception Startup, selected after technical and product review for its work in GPU-accelerated, edge-first healthcare AI. He also serves as a Youth Digital Health Champion with the Africa Centres for Disease Control and Prevention, contributing to continental digital health strategy, policy dialogue, and systems strengthening. Through ZeroX Intelligence, Olivier is also leading on-the-ground healthcare infrastructure initiatives, including the deployment of autonomous UV-C disinfection robotics to improve infection prevention in clinical settings.
His work is grounded in a clear mission: ensuring Africa doesn't just host pilots or consume technology, but leads in building scalable systems that work in real conditions.#OlivierNiyonshima #ZeroXIntelligence #MedikalAfrica#AntimicrobialResistance #AMR #HealthAI #DigitalHealth #AfricaInnovation #NVIDIAInception #EdgeAI #ClinicalAI #HealthcareTechnology #AIinHealthcare #PublicHealth #SuperbugsSupport the show
This podcast features Gabriele Corso and Jeremy Wohlwend, co-founders of Boltz and authors of the Boltz Manifesto, discussing the rapid evolution of structural biology models from AlphaFold to their own open-source suite, Boltz-1 and Boltz-2. The central thesis is that while single-chain protein structure prediction is largely "solved" through evolutionary hints, the next frontier lies in modeling complex interactions (protein-ligand, protein-protein) and generative protein design, which Boltz aims to democratize via open-source foundations and scalable infrastructure.

Full Video Pod: On YouTube!

Timestamps
* 00:00 Introduction to Benchmarking and the "Solved" Protein Problem
* 06:48 Evolutionary Hints and Co-evolution in Structure Prediction
* 10:00 The Importance of Protein Function and Disease States
* 15:31 Transitioning from AlphaFold 2 to AlphaFold 3 Capabilities
* 19:48 Generative Modeling vs. Regression in Structural Biology
* 25:00 The "Bitter Lesson" and Specialized AI Architectures
* 29:14 Development Anecdotes: Training Boltz-1 on a Budget
* 32:00 Validation Strategies and the Protein Data Bank (PDB)
* 37:26 The Mission of Boltz: Democratizing Access and Open Source
* 41:43 Building a Self-Sustaining Research Community
* 44:40 Boltz-2 Advancements: Affinity Prediction and Design
* 51:03 BoltzGen: Merging Structure and Sequence Prediction
* 55:18 Large-Scale Wet Lab Validation Results
* 01:02:44 Boltz Lab Product Launch: Agents and Infrastructure
* 01:13:06 Future Directions: Developability and the "Virtual Cell"
* 01:17:35 Interacting with Skeptical Medicinal Chemists

Key Summary

Evolution of Structure Prediction & Evolutionary Hints
* Co-evolutionary Landscapes: The speakers explain that breakthrough progress in single-chain protein prediction relied on decoding evolutionary correlations where mutations in one position necessitate mutations in another to conserve 3D structure.
* Structure vs. Folding: They differentiate between structure prediction (getting the final answer) and folding (the kinetic process of reaching that state), noting that the field is still quite poor at modeling the latter.
* Physics vs. Statistics: RJ posits that while models use evolutionary statistics to find the right "valley" in the energy landscape, they likely possess a "light understanding" of physics to refine the local minimum.

The Shift to Generative Architectures
* Generative Modeling: A key leap in AlphaFold 3 and Boltz-1 was moving from regression (predicting one static coordinate) to a generative diffusion approach that samples from a posterior distribution.
* Handling Uncertainty: This shift allows models to represent multiple conformational states and avoid the "averaging" effect seen in regression models when the ground truth is ambiguous.
* Specialized Architectures: Despite the "bitter lesson" of general-purpose transformers, the speakers argue that equivariant architectures remain vastly superior for biological data due to the inherent 3D geometric constraints of molecules.

Boltz-2 and Generative Protein Design
* Unified Encoding: Boltz-2 (and BoltzGen) treats structure and sequence prediction as a single task by encoding amino acid identities into the atomic composition of the predicted structure.
* Design Specifics: Instead of a sequence, users feed the model blank tokens and a high-level "spec" (e.g., an antibody framework), and the model decodes both the 3D structure and the corresponding amino acids.
* Affinity Prediction: While model confidence is a common metric, Boltz-2 focuses on affinity prediction, quantifying exactly how tightly a designed binder will stick to its target.

Real-World Validation and Productization
* Generalized Validation: To prove the model isn't just "regurgitating" known data, Boltz tested its designs on 9 targets with zero known interactions in the PDB, achieving nanomolar binders for two-thirds of them.
* Boltz Lab Infrastructure: The newly
launched Boltz Lab platform provides "agents" for protein and small molecule design, optimized to run 10x faster than open-source versions through proprietary GPU kernels.
* Human-in-the-Loop: The platform is designed to convert skeptical medicinal chemists by allowing them to run parallel screens and use their intuition to filter model outputs.

Transcript

RJ [00:05:35]: But the goal remains to, like, you know, really challenge the models, like, how well do these models generalize? And, you know, we've seen in some of the latest CASP competitions, like, while we've become really, really good at proteins, especially monomeric proteins, you know, other modalities still remain pretty difficult. So it's really essential, you know, in the field that there are, like, these efforts to gather, you know, benchmarks that are challenging. So it keeps us in line, you know, about what the models can do or not.

Gabriel [00:06:26]: Yeah, it's interesting you say that, like, in some sense, CASP, you know, at CASP 14, a problem was solved and, like, pretty comprehensively, right? But at the same time, it was really only the beginning. So you can say, like, what was the specific problem you would argue was solved? And then, like, you know, what is remaining, which is probably quite open.

RJ [00:06:48]: I think we'll steer away from the term solved, because we have many friends in the community who get pretty upset at that word. And I think, you know, fairly so. But the problem that was, you know, that a lot of progress was made on was the ability to predict the structure of single chain proteins. So proteins can, like, be composed of many chains. And single chain proteins are, you know, just a single sequence of amino acids. And one of the reasons that we've been able to make such progress is also because we take a lot of hints from evolution. So the way the models work is that, you know, they sort of decode a lot of hints. That comes from evolutionary landscapes.
So if you have, like, you know, some protein in an animal, and you go find the similar protein across, like, you know, different organisms, you might find different mutations in them. And as it turns out, if you take a lot of the sequences together, and you analyze them, you see that some positions in the sequence tend to evolve at the same time as other positions in the sequence, sort of this, like, correlation between different positions. And it turns out that that is typically a hint that these two positions are close in three dimensions. So part of the, you know, part of the breakthrough has been, like, our ability to also decode that very, very effectively. But what it implies also is that in absence of that co-evolutionary landscape, the models don't quite perform as well. And so, you know, I think when that information is available, maybe one could say, you know, the problem is, like, somewhat solved. From the perspective of structure prediction, when it isn't, it's much more challenging. And I think it's also worth also differentiating the, sometimes we confound a little bit, structure prediction and folding. Folding is the more complex process of actually understanding, like, how it goes from, like, this disordered state into, like, a structured, like, state. And that I don't think we've made that much progress on. But the idea of, like, yeah, going straight to the answer, we've become pretty good at.

Brandon [00:08:49]: So there's this protein that is, like, just a long chain and it folds up. Yeah. And so we're good at getting from that long chain in whatever form it was originally to the thing. But we don't know how it necessarily gets to that state. And there might be intermediate states that it's in sometimes that we're not aware of.

RJ [00:09:10]: That's right. And that relates also to, like, you know, our general ability to model, like, the different, you know, proteins are not static. They move, they take different shapes based on their energy states.
And I think we are also not that good at understanding the different states that a protein can be in, and at what frequency, what probability. So I think the two problems are quite related in some ways. Still a lot to solve. But I think it was very surprising at the time that, even with these evolutionary hints, we were able to make such dramatic progress.

Brandon [00:09:45]: So I want to ask why the intermediate states matter. But first, I kind of want to understand: why do we care what proteins are shaped like?

Gabriel [00:09:54]: Yeah, I mean, proteins are kind of the machines of our body. The way that all the processes we have in our cells work is typically through proteins, sometimes other molecules, and their interactions. And through those interactions, we have all sorts of cell functions. So when we try to understand a lot of biology, how our body works, how diseases work, we often try to boil it down to: what is going right in the case of normal biological function, and what is going wrong in the disease state. And we boil it down to proteins and other molecules and their interactions. So when we try predicting the structure of proteins, it's critical to have an understanding of those interactions. It's a bit like the difference between having a list of parts that you would put into a car and seeing the car in its final form: seeing the car really helps you understand what it does. On the other hand, going to your question of why we care about how the protein folds, or how the car is made, to some extent: sometimes when something goes wrong, there are cases of proteins misfolding.
In some diseases and so on, if we don't understand this folding process, we don't really know how to intervene.

RJ [00:11:30]: There's this nice line, I think it's in the AlphaFold 2 manuscript, where they discuss why we were even hopeful that we could target the problem in the first place. And there's this notion that, well, for proteins that fold, the folding process is almost instantaneous, which is a strong signal that we might be able to predict this very constrained thing that the protein does so quickly. And of course that's not the case for all proteins, and there are a lot of really interesting mechanisms in the cells, but I remember reading that and thinking, yeah, that's somewhat of an insightful point.

Gabriel [00:12:10]: I think one of the interesting things about the protein folding problem, and part of the reason why people thought it was impossible, is that it used to be studied as a classical example of an NP problem. There are so many different shapes that these amino acids could take, and this grows combinatorially with the size of the sequence. So there used to be a lot of more theoretical computer science work thinking about and studying protein folding as an NP problem. And so it was very surprising, also from that perspective, seeing
machine learning make it so clear that there is some signal in those sequences, through evolution, but also through other things that we as humans are probably not really able to understand, but that these models have learned.

Brandon [00:13:07]: And so Andrew White, who we were talking to a few weeks ago, said that he was following the development of this, and that there were actually ASICs developed just to solve this problem. So there were many, many millions of computational hours spent trying to solve this problem before AlphaFold. And just to be clear, one thing that you mentioned was that there's this kind of co-evolution of mutations, and that you see this again and again in different species. So explain: why does that give us a good hint that they're close to each other?

RJ [00:13:41]: Think of it this way: if I have some amino acid that mutates, it's going to impact everything around it, right, in three dimensions. And so it's almost like the protein, through several probably random mutations and evolution, ends up sort of figuring out that this other amino acid needs to change as well for the structure to be conserved. The whole principle is that the structure is probably largely conserved, because there's this function associated with it. And so it's really different positions compensating for each other.

Brandon [00:14:17]: I see. Those hints in aggregate give us a lot. So you can start to build up information about what is close to what, and then what kinds of folds are possible given that, and then what the end state is, and therefore you can make a lot of inferences about what the actual overall shape is.

RJ [00:14:30]: Yeah, that's right.
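The co-evolution signal described above can be made concrete with a toy example. This is my own illustrative sketch, not the field's actual method (real pipelines use large alignments plus corrections such as APC): it measures mutual information between columns of a tiny, made-up multiple sequence alignment, where the two columns that mutate together score highest.

```python
from collections import Counter
from itertools import combinations
from math import log2

def column_mi(msa, i, j):
    """Mutual information (in bits) between alignment columns i and j."""
    n = len(msa)
    pi = Counter(s[i] for s in msa)
    pj = Counter(s[j] for s in msa)
    pij = Counter((s[i], s[j]) for s in msa)
    mi = 0.0
    for (a, b), c in pij.items():
        p_ab = c / n
        mi += p_ab * log2(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

# Toy MSA: columns 0 and 2 always mutate together (A<->C paired with D<->E),
# while column 1 varies independently of both.
msa = ["AKD", "AMD", "CKE", "CME", "AKD", "CME"]
pairs = sorted(
    combinations(range(3), 2),
    key=lambda p: -column_mi(msa, *p),
)
print(pairs[0])  # (0, 2): the co-evolving pair carries the strongest signal
```

In a real alignment, high-scoring column pairs like `(0, 2)` are the candidate three-dimensional contacts the speakers describe.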
It's almost like you have this big three-dimensional valley where you're trying to find these low-energy states, and there's so much to search through that it's almost overwhelming. But these hints maybe put you in an area of the space that's already kind of close to the solution, maybe not quite there yet. And there's always this question of how much physics these models are learning versus just pure statistics. One of the things that at least I believe is that once you're in that approximate area of the solution space, the models have some understanding of how to get you to the lower-energy state. So maybe you have some light understanding of physics, but maybe not quite enough to know how to navigate the whole space.

Brandon [00:15:25]: Right, okay. So we need to give it these hints to kind of get into the right valley, and then it finds the minimum or something.

Gabriel [00:15:31]: Yeah. One interesting explanation of how AlphaFold works, which I think is quite insightful (of course it doesn't cover the entirety of what AlphaFold does), is one I'll borrow from Sergey Ovchinnikov at MIT. The interesting thing about AlphaFold is that it's got this very peculiar architecture, and this architecture operates on this pairwise context between amino acids. And so the idea is that the MSA (multiple sequence alignment, exactly this evolutionary information) probably gives you the first hint about which amino acids are potentially close to each other. And from this evolutionary information about potential contacts, it's almost as if the model is
sort of running some kind of Dijkstra's algorithm, where it's decoding: okay, these two have to be close; and if these are close and this is connected to that, then those have to be somewhat close too. And you decode this into basically a pairwise distance matrix. And then from this rough pairwise distance matrix, you decode the actual potential structure.

Brandon [00:16:42]: Interesting. So there are kind of two different things going on, the coarse-grained and then the fine-grained optimization. Very cool.

Gabriel [00:16:53]: Yeah. You mentioned AlphaFold3, so maybe this is a good time to move on to that. AlphaFold2 came out and it was fairly groundbreaking for this field; everyone got very excited. A few years later, AlphaFold3 came out. For some more history: what were the advancements in AlphaFold3? And after that, we'll talk a bit about how it connects to Boltz. So, after AlphaFold2 came out, Jeremy and I got into the field, and with many others, the clear problem that was obvious after that was: okay, now we can do individual chains. Can we do interactions? Interactions between different proteins, proteins with small molecules, proteins with other molecules. And why are interactions important? Interactions are important because, to some extent, that's the way these machines, these proteins, have a function: the function comes from the way they interact with other proteins and other molecules. Actually, in the first place, the individual machines are often, as Jeremy was mentioning, not made of a single chain but of multiple chains. And then these multiple chains interact with other molecules to give them their function.
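The "pairwise distance matrix to structure" step has a classical geometric core that can be sketched in a few lines. This is an illustration of that geometry, not AlphaFold's actual decoder (which is a learned module): classical multidimensional scaling recovers 3-D coordinates, up to rotation and reflection, from an exact Euclidean distance matrix.

```python
import numpy as np

def coords_from_distances(D, dim=3):
    """Classical MDS: pairwise distance matrix -> centered coordinates."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    G = -0.5 * J @ (D ** 2) @ J           # Gram matrix of centered coordinates
    w, V = np.linalg.eigh(G)              # eigh returns ascending eigenvalues
    idx = np.argsort(w)[::-1][:dim]       # keep the top `dim` eigenpairs
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

# Random "true" structure -> distance matrix -> recovered structure.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
X_rec = coords_from_distances(D)
D_rec = np.linalg.norm(X_rec[:, None] - X_rec[None, :], axis=-1)
print(np.allclose(D, D_rec))  # True: distances reproduced up to rotation/reflection
```

In practice the predicted distance matrix is noisy and incomplete, which is why the real fine-grained stage is a learned, iterative refinement rather than one linear-algebra step.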
And on the other hand, when we try to intervene on these interactions (think about a disease, think about a biosensor, or many other cases), we are trying to design molecules or proteins that interact in a particular way with what we would call a target protein, or target. This problem, after AlphaFold2, became clearly one of the biggest problems in the field to solve, and many groups, including ours and others, started making contributions to this problem of trying to model these interactions. And AlphaFold3 was a significant advancement on the problem of modeling interactions. One of the interesting things they were able to do, while some of the rest of the field had tried to model different interactions separately (how a protein interacts with small molecules, how a protein interacts with other proteins, how RNA or DNA take their structure), was to put everything together and train very large models, with a lot of advances, including changes to some of the key architectural choices, and get a single model that was able to set a new state-of-the-art performance across all of these different modalities: protein with small molecules, which is critical to developing new drugs; protein with protein; understanding interactions of proteins with RNA and DNA; and so on.

Brandon [00:19:39]: Just to satisfy the AI engineers in the audience, what were some of the key architectural and data changes that made that possible?

Gabriel [00:19:48]: Yeah, so one critical one, which was not necessarily unique to AlphaFold3 (there were actually a few other teams, including ours, that proposed this), was moving from modeling structure prediction as a regression problem,
where there is a single answer and you're trying to shoot for that answer, to a generative modeling problem, where you have a posterior distribution of possible structures and you're trying to sample from that distribution. And this achieves two things. One is that it starts to allow us to model more dynamic systems. As we said, some of these proteins can actually take multiple structures, and you can now model that by modeling the entire distribution. But second, from a more core modeling perspective, when you move from a regression problem to a generative modeling problem, you change the way the model handles uncertainty. If the model is undecided between different answers, what's going to happen in a regression model is that it will try to output an average of those different answers it had in mind. With a generative model, what you do instead is sample all these different answers, and then maybe use separate models to analyze those different answers and pick out the best. So that was one of the critical improvements. The other improvement is that they significantly simplified, to some extent, the architecture, especially of the final model that takes those pairwise representations and turns them into an actual structure. That now looks a lot more like a traditional transformer than the very specialized equivariant architecture it was in AlphaFold2.

Brandon [00:21:41]: So this is the bitter lesson, a little bit.

Gabriel [00:21:45]: There is some aspect of the bitter lesson, but the interesting thing is that it's very far from being a simple transformer. This field is one of the, I'd argue, very few fields in applied machine learning where we still have architectures that are very specialized.
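The averaging failure mode of regression that Gabriel describes is easy to see in one dimension. This is a toy sketch of my own, with the "structure" reduced to a single coordinate that flips between two conformations at -1 and +1:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two equally likely "conformations" of a system, reduced to one coordinate.
samples = rng.choice([-1.0, 1.0], size=10_000)

# An MSE-trained regression model converges to the conditional mean:
# a point near 0.0 that the system never actually occupies.
regression_answer = samples.mean()
print(round(float(regression_answer), 2))

# A generative model instead draws from the learned distribution, so every
# prediction lands on one of the real modes; a separate scoring model can
# then pick among the sampled candidates.
generative_answers = rng.choice(samples, size=5)
print(sorted(set(np.abs(generative_answers).tolist())))  # every draw sits at a true mode
```

The same logic, in many more dimensions, is why sampling plus reranking beats regressing to a physically meaningless average when the posterior is multimodal.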
And there are many people who have tried to replace these architectures with simple transformers. There is a lot of debate in the field, but I think most of the consensus is that the performance we get from the specialized architectures is vastly superior to what we get from a single transformer. Another interesting thing, staying on the modeling and machine learning side, which I think is somewhat counterintuitive coming from some of the other fields and applications, is that scaling hasn't really worked the same way in this field. Now, models like AlphaFold2 and AlphaFold3 are still very large models.

RJ [00:29:14]: in a place, I think, where we had some experience working with the data and working with this type of model. And I think that put us already in a good place to produce it quickly. And I would even say we could have done it quicker; the problem was that for a while we didn't really have the compute, and so we couldn't train the model. We actually only trained the big model once. That's how much compute we had: we could only train it once. And so while the model was training, we were finding bugs left and right, a lot of them that I wrote. I remember doing surgery in the middle: stopping the run, making the fix, relaunching. And we never actually went back to the start; we just kept training it with the bug fixes along the way, which would be impossible to reproduce now. That model has gone through such a curriculum that it learned some weird stuff.
But yeah, somehow by miracle, it worked out.

Gabriel [00:30:13]: The other funny thing is that we trained most of that model on a cluster from the Department of Energy. But that's a shared cluster that many groups use, so we were basically training the model for two days, and then it would go back into the queue and stay a week in the queue. And so it was pretty painful. So towards the end, with Evan, the CEO of Genesis: I was telling him a bit about the project and about this frustration with the compute, and luckily he offered to help. So we got the help from Genesis to finish up the model. Otherwise it probably would have taken a couple of extra weeks.

Brandon [00:30:57]: Yeah, yeah.

Brandon [00:31:02]: And then there's some progression from there.

Gabriel [00:31:06]: Yeah. So I would say that Boltz 1, but also these other sets of models that came around the same time, were a big leap from the previous open-source models, really approaching the level of AlphaFold 3. But I would still say that, even to this day, there are some specific instances where AlphaFold 3 works better. I think one common example is antibody-antigen prediction, where AlphaFold 3 still seems to have an edge in many situations. Obviously, these are somewhat different models: you run them, you obtain different results. So it's not always the case that one model is better than the other, but in aggregate we still saw that, especially at the time.

Brandon [00:32:00]: So AlphaFold 3 still has a bit of an edge.
We should talk about this more when we talk about BoltzGen, but how do you know one model is better than the other? I make a prediction, you make a prediction: how do you know?

Gabriel [00:32:11]: Yeah. So, the great thing about structure prediction (once we get into the design space of designing new small molecules and new proteins, this becomes a lot more complex, but the great thing about structure prediction) is that, a bit like CASP was doing, the way you can evaluate models is to train a model on structures that were released across the field up until a certain time. And one of the things that we didn't talk about, which was really critical in all this development, is the PDB, the Protein Data Bank. It's this common resource, basically a common database where every biologist publishes their structures. So we can train on all the structures that were put in the PDB until a certain date, and then we basically look for recent structures: okay, which structures look pretty different from anything that was published before? Because we really want to understand generalization.

Brandon [00:33:13]: And then on these new structures, we evaluate all the different models. So you just know when AlphaFold3 was trained, and you intentionally train to the same date or something like that?

Gabriel [00:33:24]: Exactly, right. And so this is the way you can somewhat easily compare these models; obviously, that assumes that the training cutoffs line up.

Brandon: You've always been very passionate about validation. I remember DiffDock, and then there was DiffDock-L and DockGen. You've thought very carefully about this in the past.
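The evaluation recipe Gabriel describes (train up to a cutoff date, then test on newer, dissimilar structures) can be sketched with hypothetical records. The entries, dates, and the `max_similarity_to_train` field below are invented for illustration; real pipelines compute similarity by sequence or structure clustering against the whole training set.

```python
from datetime import date

# Hypothetical stand-ins for PDB entries: an id, a release date, and a
# precomputed similarity to the most similar pre-cutoff structure.
entries = [
    {"id": "A1", "released": date(2020, 5, 1), "max_similarity_to_train": 0.95},
    {"id": "B2", "released": date(2022, 3, 9), "max_similarity_to_train": 0.90},
    {"id": "C3", "released": date(2023, 7, 2), "max_similarity_to_train": 0.20},
    {"id": "D4", "released": date(2023, 9, 15), "max_similarity_to_train": 0.35},
]
CUTOFF = date(2021, 9, 30)   # train on everything released before this date

train = [e for e in entries if e["released"] < CUTOFF]
# Test set: released after the cutoff AND dissimilar to all training data,
# so a good score actually demonstrates generalization rather than recall.
test = [e for e in entries if e["released"] >= CUTOFF
        and e["max_similarity_to_train"] < 0.4]
print([e["id"] for e in train], [e["id"] for e in test])  # ['A1'] ['C3', 'D4']
```

The comparison across models is only fair when every model's training cutoff predates the whole test set, which is the caveat raised in the conversation.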
Actually, I think DockGen is a really funny story; I don't know if you want to talk about that. It's an interesting, like...

Gabriel: Yeah. I think one of the amazing things about putting things open source is that we get a ton of feedback from the field. And sometimes we get great feedback from people who really like it. But honestly, most of the time, and that's also maybe the most useful feedback, it's people sharing where it doesn't work. At the end of the day, and this is true across other fields of machine learning, to make progress it's critical to set clear benchmarks. And as you start making progress on certain benchmarks, you need to improve the benchmarks and make them harder and harder. That's the progression of how the field operates. So, the example of DockGen: we published this initial model called DiffDock in my first year of PhD, which was one of the early models trying to predict interactions between proteins and small molecules; it came out a year after AlphaFold2 was published. Now, on the one hand, on the benchmarks we were using at the time, DiffDock was doing really well, outperforming some of the traditional physics-based methods. But on the other hand, when we started giving these tools to many biologists (one example was the group of Nick Polizzi at Harvard, who we collaborated with), we started noticing this clear pattern where, for proteins that were very different from the ones the model was trained on, the model was struggling. And so it seemed clear that this is probably where we should put our focus.
And so we first developed, with Nick and his group, a new benchmark, and then went after it and said: okay, what can we change about the current architecture to improve this kind of generalization? And that's the same thing we're still doing today: where does the model not work? And then, once we have that benchmark, let's throw everything we have, any ideas we have about the problem, at it.

RJ [00:36:15]: And there's a lot of healthy skepticism in the field, which I think is great. I think it's very clear that there's a ton of things the models don't really work well on, but one thing that's probably undeniable is just the pace of progress, how much better we're getting every year. And so if you assume any constant rate of progress moving forward, I think things are going to look pretty cool at some point in the future.

Gabriel [00:36:42]: ChatGPT was only three years ago.

RJ [00:36:45]: Yeah, I mean, it's wild, right? It's one of those things: even being in the field, you don't see it coming. And hopefully we'll continue to have as much progress as we've had the past few years.

Brandon [00:36:55]: So this is maybe an aside, but I'm really curious. You get this great feedback from the community by being open source, right? My question is partly, okay, if you open source, everyone can copy what you did, but it's also maybe about balancing priorities, right? Where the community is saying, I want this, there are all these problems with the model, but my customers don't care, right? So how do you think about that?
Yeah.

Gabriel [00:37:26]: So I would say a couple of things. One is that part of our goal with Boltz (and this is also established as the mission of the public benefit company that we started) is to democratize access to these tools. But one of the reasons we realized Boltz needed to be a company, and couldn't just be an academic project, is that putting a model on GitHub is definitely not enough to get chemists and biologists across academia, biotech, and pharma to use your model in their therapeutic programs. And so a lot of what we think about at Boltz, beyond just the models, is all the layers that come on top of the models to get from those models to something that can really enable scientists in the industry. That goes into building the right kinds of workflows, ones that take in the data and try to directly answer the problems that the chemists and the biologists are asking, and then also building the infrastructure. All this to say that even with models fully open, we see a ton of potential for products in the space. And a critical part about a product is that even with an open-source model, running the model is not free. As we were saying, these are pretty expensive models, and especially (maybe we'll get into this) these days we're seeing pretty dramatic inference-time scaling of these models, where the more you run them, the better the results are. But you start getting to a point where compute and compute costs become a critical factor.
And so putting a lot of work into building the right infrastructure, building the optimizations, and so on really allows us to provide a much better service than the open-source models alone. That said, I do still think, and we will continue to put a lot of our models open source, that the critical role of open-source models is helping the community progress on the research, from which we all benefit. So we'll continue, on the one hand, to put some of our base models open source so that the field can build on top of them (and, as we discussed earlier, we learn a ton from the way the field uses and builds on top of our models), but then try to build a product that gives the best experience possible to scientists. So that a chemist or a biologist doesn't need to spin up a GPU and set up our open-source model in a particular way. A bit like how, even though I am a computer scientist, a machine learning scientist, I don't necessarily take an open-source LLM and spin it up myself; I just open the ChatGPT app or Claude Code and use it as an amazing product. We want to give that same experience.

Brandon [00:40:40]: I heard a good analogy yesterday, that a surgeon doesn't want the hospital to design a scalpel, right?

Brandon [00:40:48]: So just buy the scalpel.

RJ [00:40:50]: You wouldn't believe the number of people, even in my short time between AlphaFold3 coming out and the end of the PhD, who would reach out just for us to run AlphaFold3 for them, or things like that.
Just because, Boltz in our case, it's just not that easy to do if you're not a computational person. And I think part of the goal here is also that, while we obviously continue to build the interface for computational folks, the models are also accessible to a larger, broader audience. And that comes from good interfaces and things like that.

Gabriel [00:41:27]: I think one really interesting thing about Boltz is that with its release you didn't just release a model, you created a community, and that community grew very quickly. Did that surprise you? And what has the evolution of that community been, and how has it fed into Boltz?

RJ [00:41:43]: If you look at its growth, it's very much that when we release a new model, there's a big jump. But yeah, I mean, it's been great. We have a Slack community that has thousands of people in it. And it's actually self-sustaining now, which is the really nice part, because it's almost overwhelming to answer everyone's questions and help; it's really difficult for the few people that we were. But it ended up that people would answer each other's questions and help one another. And so the Slack has been kind of self-sustaining, and that's been really cool to see.

RJ [00:42:21]: And that's the Slack part, but then also, obviously, on GitHub we've had a nice community. I think we also aspire to be even more active on it than we've been in the past six months, which has been a bit challenging for us. But...
Yeah, the community has been really great, and there are a lot of papers that have come out with new evolutions on top of Boltz. It surprised us to some degree, because there are a lot of models out there, and people converging on ours was really cool. And I think it speaks to the importance, when you put code out, of putting a lot of emphasis on making it as easy to use as possible, which is something we thought a lot about when we released the code base. It's far from perfect, but, you know.

Brandon [00:43:07]: Do you think that was one of the factors that caused your community to grow, just the focus on ease of use, making it accessible?

RJ [00:43:14]: I think so, yeah. And we've heard it from a few people over the years now. Some people still think it should be a lot nicer, and they're right. But I think it was, at the time, maybe a little bit easier than other things.

Gabriel [00:43:29]: The other part that I think led to the community, and to some extent to the trust in what we put out, is the fact that it's not really been just one model. Maybe we'll talk about it: after Boltz 1, there were maybe another couple of models released or open-sourced soon after, and we continued that open-source journey with Boltz 2, where we are not only improving structure prediction but also starting to do affinity prediction, understanding the strength of the interactions between these different molecules, which is this critical property that you often want to optimize in discovery programs.
And then, more recently, also a protein design model. And so we've been building this suite of models that come together and interact with one another, where there's almost an expectation, something we take very much to heart, that across the entire suite of different tasks we have the best, or among the best, models out there, so that our open-source tools can be the go-to models for everybody in the industry.

Brandon: I really want to talk about Boltz 2, but before that, one last question in this direction: was there anything about the community that surprised you? Was someone doing something where you thought, why would you do that, that's crazy? Or, that's actually genius, and I never would have thought of that?

RJ [00:45:01]: I mean, we've had many contributions. I think one of the interesting ones: we had this one individual who wrote a complex GPU kernel for part of the architecture. The funny thing is that that piece of the architecture had been there since AlphaFold 2, and I don't know why it took Boltz for this person to decide to do it, but that was a really great contribution. We've had a bunch of others, like people figuring out ways to hack the model to do things like cyclic peptides. I don't know if any other interesting ones come to mind.

Gabriel [00:45:41]: One cool one, and this was something that was initially proposed as a message in the Slack channel by Tim O'Donnell: there are some cases, for example the antibody-antigen interactions we discussed, where the models don't necessarily get the right answer.
What he noticed is that the models were somewhat stuck in how they predicted the antibodies. So he ran experiments using the fact that you can condition this model, you can give it hints. He gave the model a hint at every 10th position, basically: you should bind to the first residue, or you should bind to the 11th residue, or the 21st residue, scanning the entire antigen.Brandon [00:46:33]: Residues are the...Gabriel [00:46:34]: The amino acids, yeah. The first amino acid, the 11th amino acid, and so on. So it's like doing a scan: conditioning the model on each hint, predicting all of them, then looking at the confidence of the model in each case and taking the top one. It's a somewhat crude way of doing inference-time search, but surprisingly, for antibody-antigen prediction, it actually helped quite a bit. There are some interesting ideas there; as the developer of the model you say, wow, why would the model be so dumb? But it's very interesting, and it leads you to start thinking, okay, can I do this not with brute force but in a smarter way?RJ [00:47:22]: We've also done a lot of work in that direction, and it speaks to the power of scoring. We're seeing that a lot; I'm sure we'll talk about it more when we talk about BoltzGen. Our ability to take a structure and determine that it is good, somewhat accurate, whether that's a single chain or an interaction, is a really powerful way of improving the models.
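A crude sketch of the residue-hint scan described above. `predict_with_hint`, its toy confidence score, and the stride of 10 are all illustrative assumptions, not Boltz's actual API:

```python
def predict_with_hint(antibody, hint_residue):
    """Placeholder for a conditioned prediction.

    A real implementation would run a structure predictor (e.g. Boltz) with a
    contact/pocket constraint on `hint_residue` and return the predicted
    structure plus the model's confidence. Here the "structure" is a dict and
    the confidence is a toy score peaking near residue 42.
    """
    confidence = 1.0 / (1.0 + abs(hint_residue - 42))
    return {"hint": hint_residue}, confidence

def epitope_scan(antibody, antigen_length, stride=10):
    """Try a binding hint every `stride` residues; keep the most confident."""
    best_structure, best_conf = None, float("-inf")
    for residue in range(1, antigen_length + 1, stride):
        structure, conf = predict_with_hint(antibody, residue)
        if conf > best_conf:
            best_structure, best_conf = structure, conf
    return best_structure, best_conf

structure, conf = epitope_scan("my_antibody", antigen_length=100)
```

With the toy confidence above, scanning hints 1, 11, ..., 91 settles on the hint closest to the "true" epitope; the real version would instead rank full predicted complexes by the model's own confidence head.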
Sort of like, if you can sample a ton, and you assume that if you sample enough you're likely to have the good structure in there, then it really just becomes a ranking problem. Part of the inference-time scaling that Gabriel was talking about is very much that: the more we sample, the more the ranking model ends up finding something it really likes. So I think our ability to get better at ranking is also what's going to enable the next big breakthroughs. Interesting.Brandon [00:48:17]: My understanding is there's a diffusion model, you generate some stuff, and then, as you just said, you rank it using a score. Can you talk about those different parts? Yeah.Gabriel [00:48:34]: First of all, one of the critical beliefs we had when we started working on Boltz-1 was that structure prediction models are somewhat our field's version of foundation models: they learn how proteins and other molecules interact, and we can leverage that learning to do all sorts of other things. With Boltz-2, we leveraged it to do affinity prediction: understanding, if I give you this protein and this molecule, how tight is that interaction? For the design model, what we did was take that foundation model and fine-tune it to predict entirely new proteins. The way that works is that for the protein you're designing, instead of feeding in an actual sequence, you feed in a set of blank tokens, and you train the model to predict both the structure of that protein.
And also what the different amino acids of that protein are. So the way the design model operates is that you feed in a target protein that you may want to bind to, or DNA or RNA, and then you feed in a high-level design specification of what you want your new protein to be. For example, it could be an antibody with a particular framework, it could be a peptide, it could be many other things. And that's with natural language, or? It's basically prompting: we have a spec that you specify, you feed this spec to the model, and the model translates it into a set of conditioning tokens and a set of blank tokens. Then, as part of the diffusion process, it decodes a new structure and a new sequence for your protein. We take that and, as Jeremy was saying, we try to score it: how good a binder is it to the original target?Brandon [00:50:51]: You're using basically Boltz to predict the folding and the affinity to that molecule, and that kind of gives you a score? Exactly.Gabriel [00:51:03]: You use this model to predict the folding, and then you do two things. One is that you re-predict the structure with something like Boltz-2 and compare that structure with what the design model predicted. In the field this is called consistency: you want to make sure that the structure you're predicting is actually what you were trying to design, and that gives you much better confidence that it's a good design. So that's the first filtering.
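The generate, filter, and rank loop described above can be caricatured in a few lines. `sample_design`, `rmsd`, and the 2.0 Å consistency cutoff are invented placeholders for the diffusion sampler, the refolding comparison, and a threshold; none of this is the actual Boltz pipeline:

```python
import math
import random

def rmsd(a, b):
    """Root-mean-square deviation between two equal-length coordinate lists."""
    sq = sum((p - q) ** 2 for u, v in zip(a, b) for p, q in zip(u, v))
    return math.sqrt(sq / len(a))

def sample_design(rng):
    """Stand-in for one diffusion sample.

    Returns (designed coords, refolded coords, confidence score); the refolded
    structure drifts from the design by a random amount to mimic refolding.
    """
    noise = rng.random()
    designed = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0)]
    refolded = [(noise, 0.0, 0.0), (3.8 + noise, 0.0, 0.0)]
    return designed, refolded, rng.random()

def design_pipeline(n_samples, cutoff=2.0, seed=0):
    """Sample many designs, keep self-consistent ones, return the best-scoring."""
    rng = random.Random(seed)
    survivors = []
    for _ in range(n_samples):
        designed, refolded, score = sample_design(rng)
        if rmsd(designed, refolded) < cutoff:  # consistency filter
            survivors.append((score, designed))
    return max(survivors)[1] if survivors else None

best = design_pipeline(100)
```

The point of the structure: sampling more candidates only helps because the filter and the ranking score can tell good candidates from bad ones, which is the "ranking problem" framing from the conversation.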
And the second filter in the pipeline that we released is that we look at the confidence the model has in the structure. Now, unfortunately, going to your question of predicting affinity, confidence is not a very good predictor of affinity. So one of the things we've made a ton of progress on since we released Boltz-2, and we have some new results that we're going to announce soon, is the ability to get much better hit rates when, instead of relying on the confidence of the model, we directly try to predict the affinity of the interaction.Brandon [00:52:03]: Okay, just backing up a minute. So your diffusion model actually predicts not only the protein sequence but also the folding of it? Exactly.Gabriel [00:52:32]: One of the big things we did differently compared to other models in the space, and there were some papers that had done this before, but we really scaled it up, was merging structure prediction and sequence prediction into almost the same task. The way the design model works, the only thing you're doing is predicting the structure. The only supervision we give is on the structure, but because the structure is atomic, and the different amino acids have different atomic compositions, from the way the model places the atoms we recover not only the structure but also the identity of the amino acid the model believed was there. So instead of having two supervision signals, one discrete and one continuous, that don't interact well together.
We build an encoding of sequences in structures that lets us use exactly the same supervision signal we were using for Boltz-2, largely similar to what AlphaFold 3 proposed, which is very scalable, and we can use that to design new proteins. Oh, interesting.RJ [00:53:58]: Maybe a quick shout-out to Hannes Stark on our team, who did all this work. Yeah.Gabriel [00:54:04]: Yeah, that was a really cool idea. Looking at the paper, for this encoding you just add a bunch of atoms, which can be anything, and then they get rearranged and basically plopped on top of each other, and that encodes what the amino acid is. There's a unique way of doing this; it was such a cool, fun idea.RJ [00:54:29]: I think that idea had existed before. Yeah, there were a couple of papers.Gabriel [00:54:33]: Yeah, papers had proposed this, and Hannes really took it to large scale.Brandon [00:54:39]: A lot of the BoltzGen paper is dedicated to the validation of the model. In my opinion, all the people we talk to feel that wet-lab validation, or whatever the appropriate real-world validation is, is a big giant part of the problem, if not the whole problem. So can you talk a little bit about the highlights there? Because to me the results are impressive, both from the perspective of the model and also just the effort that went into the validation by a large team.Gabriel [00:55:18]: First of all, I should start by saying that both when we were at MIT, in Tommi Jaakkola's and Regina Barzilay's labs, and now at Boltz, we are not a bio lab and we are not a therapeutics company.
So to some extent we were forced to look outside our group and our team for experimental validation. One of the things Hannes pioneered in the team was this idea: rather than going to one specific group, finding one specific system, maybe overfitting a bit to that system and validating there, how can we test the model across a very wide variety of settings? Protein design is such a wide task, with all sorts of applications from therapeutics to biosensors and many others, so can we get validation that goes across many different tasks? He put together something like 25 different academic and industry labs that committed to testing some of the designs from the model, with some of this testing still ongoing, giving results back to us in exchange for hopefully getting some great new sequences for their task. He was able to coordinate this very wide set of scientists, and in the paper I think we already shared results from eight to ten different labs: results from designing peptides targeting ordered proteins, peptides targeting disordered proteins, results from designing proteins that bind to small molecules, and results from designing nanobodies, across a wide variety of targets. That gave the paper a lot of validation of the model, and validation that was broad.Brandon [00:57:39]: And would those be therapeutics for those animals, or are they relevant to humans as well?
They're relevant to humans as well.Gabriel [00:57:45]: Obviously you need to do some work to, quote unquote, humanize them, making sure they have the right characteristics so they're not toxic to humans and so on.RJ [00:57:57]: There are some approved medicines on the market that are nanobodies. There's a general pattern in trying to design things that are smaller: they're easier to manufacture, but that comes with other potential challenges, maybe a little less selectivity than something that has more hands. But yeah, there's a big desire to design mini proteins, nanobodies, small peptides, things that are just great drug modalities.Brandon [00:58:27]: Okay. I think we left off talking about validation in the lab, and I was very excited to see all the diverse validations you've done. Can you go into more detail about some specific ones? Yeah.RJ [00:58:43]: The nanobody one. What was it, 15 targets? 14. 14 targets. The way this typically works is that we make a lot of designs, on the order of tens of thousands, then we rank them and pick the top; in this case that was 15 per target. Then we measure the success rates: both how many targets we were able to get a binder for, and also, more generally, out of all the binders we designed, how many actually proved to be good binders. Some of the other ones: we had a cool one where there was a small molecule and we designed a protein that binds to it. That has a lot of interesting applications, for example, as Gabriel mentioned, biosensing and things like that, which is pretty cool.
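The two success rates RJ describes, per-target (did any design bind?) and per-design (what fraction of all designs bound?), are straightforward to compute. The results dict below is invented data for illustration, not numbers from the paper:

```python
def success_rates(results):
    """`results` maps target name -> list of booleans, one per tested design."""
    targets_hit = sum(1 for designs in results.values() if any(designs))
    total_designs = sum(len(designs) for designs in results.values())
    total_binders = sum(sum(designs) for designs in results.values())
    return targets_hit / len(results), total_binders / total_designs

# Hypothetical lab readout: did each tested design bind its target?
results = {
    "target_a": [True, False, False],
    "target_b": [False, False, False],
    "target_c": [True, True, False],
}
per_target_rate, per_design_rate = success_rates(results)
```

The two numbers answer different questions: per-target rate tells you how often a campaign succeeds at all, while per-design rate tells you how many candidates you must synthesize and test per hit.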
We had a disordered protein too, I think you mentioned. Those were some of the highlights. Yeah.Gabriel [00:59:44]: The way we structured those validations: on the one hand, we had validations across a whole set of problems that the biologists we were working with came to us with. For example, in some of the experiments we designed peptides targeting RACC, a target involved in metabolism, and we had a number of other applications where we were trying to design peptides or other modalities against other therapeutically relevant targets, and we designed some proteins to bind small molecules. Some of the other testing was about getting a broader sense of how the model works, especially when tested on generalization. One of the things we found in the field was that a lot of validation, outside of validation on specific problems, was done on targets that have a lot of known interactions in the training data. So it's always a bit hard to understand how much these models are really just regurgitating or imitating what they've seen in the training data, versus really being able to design new proteins. So one of the experiments we did was to take nine targets from the PDB, filtered to ones with no known interaction in the PDB. The model has never seen this particular protein, or a similar protein, bound to another protein, so there is no way the model can just tweak something from its training set and imitate a particular known interaction. So we took those nine proteins.
We worked with Adaptyv, a CRO, and tested 15 mini proteins and 15 nanobodies against each one of them. The very cool thing we saw was that on two thirds of those targets we were able to get nanomolar binders from those 15 designs. Nanomolar is, roughly speaking, a measure of how strong the interaction is; a nanomolar binder has approximately the binding strength you need for a therapeutic. Yeah. So maybe switching directions a bit: Boltz Lab was just announced this week, or was it last week? Yeah. This is, I guess, your first product, if you want to call it that. Can you talk about what Boltz Lab is and what you hope people take away from it? Yeah.RJ [01:02:44]: As we mentioned at the very beginning, the goal with the product has been to address what the models don't do on their own, and there are largely two categories there; actually, I'll split it into three. The first: it's one thing to predict a single interaction, for example a single structure. It's another to very effectively search a design space to produce something of value. What we found building this product is that there are a lot of steps involved, and a real need to accompany the user through them. One of those steps, for example, is the creation of the target itself: how do we make sure the model has a good enough understanding of the target so we can design something against it? There are all sorts of tricks you can use to improve a particular structure prediction. So that's the first stage. And then there's the stage of designing and searching the space efficiently.
For something like BoltzGen, for example, you design many things and then you rank them. For small molecules the process is a little more complicated: we also need to make sure the molecules are synthesizable. The way we do that is that we have a generative model that learns to use appropriate building blocks, so it designs within a space we know is synthesizable. So there's really a whole pipeline of different models involved in being able to design a molecule. That's been the first part; we call them agents. We have a protein design agent and a small molecule design agent, and that's really at the core of what powers the Boltz Lab platform.Brandon [01:04:22]: So these agents, are they like a language model wrapper, or are they just your models and you're calling them agents? Because they sort of perform a function on your behalf.RJ [01:04:33]: They're more of a recipe, if you wish, and I think we use that term because of the complex pipelining and automation that goes into all this plumbing. So that's the first part of the product. The second part is the infrastructure. We need to be able to do this at very large scale for any one group that's doing a design campaign. Say you're designing a hundred thousand possible candidates to find the good one: that is a very large amount of compute. For small molecules it's on the order of a few seconds per design; for proteins it can be a bit longer. So ideally you want to do that in parallel, otherwise it's going to take you weeks.
So we've put a lot of effort into having a GPU fleet that allows any one user to do this kind of large parallel search.Brandon [01:05:23]: So you're amortizing the cost over your users. Exactly.RJ [01:05:27]: Exactly. And to some degree, whether you use 10,000 GPUs for a minute or one GPU for God knows how long, it's the same cost, so you might as well parallelize if you can. A lot of work has gone into that, making it very robust, so we can have a lot of people on the platform doing this at the same time. The third part is the interface, and the interface comes in two shapes. One is an API, which is really suited to companies that want to integrate these pipelines, these agents.RJ [01:06:01]: We're already partnering with a few distributors that are going to integrate our API. The second part is the user interface, and we've put a lot of thought into that too. This is what I mentioned earlier about broadening the audience; that's what the user interface is about. We've built a lot of interesting features into it, for example for collaboration: when you have multiple medicinal chemists going through the results and trying to pick out which molecules to go and test in the lab, it's powerful for them to each provide their own ranking and then do consensus building. So there are a lot of features around launching these large jobs, but also around collaborating on analyzing the results, that we try to solve with that part of the platform.
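RJ's earlier cost-parity point, that a wide parallel run costs the same GPU-hours as a long serial one, is just arithmetic. The $2/GPU-hour rate below is a made-up figure for illustration:

```python
def compute_cost(gpus, hours, rate_per_gpu_hour):
    """Total cost is GPU-hours times the hourly rate, however you shape the run."""
    return gpus * hours * rate_per_gpu_hour

RATE = 2.0  # assumed $/GPU-hour, purely illustrative

wide_run = compute_cost(gpus=10_000, hours=1 / 60, rate_per_gpu_hour=RATE)    # 10k GPUs for 1 minute
serial_run = compute_cost(gpus=1, hours=10_000 / 60, rate_per_gpu_hour=RATE)  # 1 GPU for ~7 days
```

Since the dollar cost is identical, the only real trade-off is wall-clock time versus the engineering effort of running a fleet reliably, which is the value a shared platform amortizes across users.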
So Boltz Lab is a combination of these three pieces in one cohesive platform. Who is this accessible to? Everyone. You do need to request access today; we're still ramping up usage, but anyone can request access. If you're an academic in particular, we provide a fair amount of free credit so you can play with the platform. If you're a startup or a biotech, you can also reach out; we'll typically hop on a call to understand what you're trying to do, and also provide a lot of free credit to get started. And with larger companies we can deploy the platform in a more secure environment; those are more customized deals that we make with partners. That's the ethos of Boltz: this idea of serving everyone, not just going after the really large enterprises. It starts with the open source, but it's also a key design principle of the product itself.Gabriel [01:07:48]: One thing I was thinking about with regard to infrastructure: in the LLM space, the cost of a token has gone down by, I think, a factor of a thousand or so over the last three years, right? Yeah. Is it possible that you can exploit economies of scale in infrastructure, so that it's cheaper to run these things on your platform than for anyone to roll their own system? A hundred percent. Yeah.RJ [01:08:08]: I mean, we're already there. Running Boltz on our platform, especially for a large screen, is considerably cheaper than it would be for anyone to take the open source model and run it themselves. And on top of the infrastructure, one of the things we've been working on is accelerating the models.
Our small molecule screening pipeline is 10x faster on Boltz Lab than in the open source, and that's also part of building a product, something that scales really well. We really wanted to get to the point where we could keep prices low enough that using Boltz through our platform is a no-brainer.Gabriel [01:08:52]: How do you think about validation of your agentic systems? Because, as you were saying earlier, AlphaFold-style models are really good at, let's say, monomeric proteins where you have co-evolution data. But now the whole point is to design something that doesn't have co-evolution data, something really novel. So you're basically leaving the domain that you know you're good at. How do you validate that?RJ [01:09:22]: There are obviously a ton of computational metrics that we rely on, but those only take you so far. You really have to go to the lab and test: okay, with method A and method B, how much better are we? How much better is my hit rate? How much stronger are my binders? It's not just about hit rate; it's also about how good the binders are. There's really no way around that. We've really ramped up the amount of experimental validation we do, so that we track progress as scientifically soundly as possible.Gabriel [01:10:00]: Yeah. One thing that is unique about us, and maybe companies like us, is that we're not working on just a couple of therapeutic pipelines where our validation would be focused on those.
When we do an experimental validation, we try to test across tens of targets, so that on the one hand we get a much more statistically significant result, which really allows us to make progress on the methodological side without being steered by overfitting on any one particular system. And of course we choose, you know, w
While Silicon Valley obsesses over AGI, Timothée Lacroix and the team at Mistral AI are quietly building the industrial and sovereign infrastructure of the future. In his first-ever appearance on a US podcast, the Mistral AI Co-Founder & CTO reveals how the company has evolved from an open-source research lab into a full-stack sovereign AI power—backed by ASML, running on their own massive supercomputing clusters, and deployed in nation-state defense clouds to break the dependency on US hyperscalers.Timothée offers a refreshing, engineer-first perspective on why the current AI hype cycle is misleading. He explains why "Sovereign AI" is not just a geopolitical buzzword but a necessity for any enterprise that wants to own its intelligence rather than rent it. He also provides a contrarian reality check on the industry's obsession with autonomous agents, arguing that "trust" matters more than autonomy and explaining why he prefers building robust "workflows" over unpredictable agents.We also dive deep into the technical reality of competing with the US giants. Timothée breaks down the architecture of the newly released Mistral 3, the "dense vs. MoE" debate, and the launch of Mistral Compute—their own infrastructure designed to handle the physics of modern AI scaling. This is a conversation about the plumbing, the 18,000-GPU clusters, and the hard engineering required to turn AI from a magic trick into a global industrial asset.Timothée LacroixLinkedIn - https://www.linkedin.com/in/timothee-lacroix-59517977/Google Scholar - https://scholar.google.com.do/citations?user=tZGS6dIAAAAJ&hl=en&oi=aoMistral AIWebsite - https://mistral.aiX/Twitter - https://x.com/MistralAIMatt Turck (Managing Director)Blog - https://mattturck.comLinkedIn - https://www.linkedin.com/in/turck/X/Twitter - https://twitter.com/mattturckFirstMarkWebsite - https://firstmark.comX/Twitter - https://twitter.com/FirstMarkCap(00:00) — Cold Open(01:27) — Mistral vs. 
The World: From Research Lab to Sovereign Power(03:48) — Inside Mistral Compute: Building an 18,000 GPU Cluster(08:42) — The Trillion-Dollar Question: Competing Without a Big Tech Parent(10:37) — The Reality of Enterprise AI: Escaping "POC Purgatory"(15:06) — Why Mistral Hires Forward Deployed Engineers (FDEs)(16:57) — The Contrarian Take: Why "Agents" are just "Workflows"(19:35) — Trust > Autonomy: The Truth About Agent Reliability(21:26) — The Missing Stack: Governance and Versioning for AI(26:24) — When Will AI Actually Work? (The 2026 Timeline)(30:33) — Beyond Chat: The "Banger" Sovereign Use Cases(35:46) — Mistral 3 Architecture: Mixture of Experts vs. Dense(43:12) — Synthetic Data & The Post-Training Bottleneck(45:12) — Reasoning Models: Why "Thinking" is Just Tool Use(46:22) — Launching DevStral 2 and the Vibe CLI(50:49) — Engineering Lessons: How to Build Frontier AI Efficiently(56:08) — Timothée's View on AGI & The Future of Intelligence
Retro and current gen gaming chat, with Trev and Stu, it's the Console Shockcast! PC gaming handhelds have evolved fast since Valve shook things up by revealing the Steam Deck in 2021—but what should a true second-generation Valve handheld look like? Trev and Stu break down what would actually make a Steam Deck 2 a meaningful upgrade, drawing on features from newer PC handhelds and asking which ones Valve should steal, refine, or ignore entirely. In this episode: We kick things off with EA FC 26 somehow running on a 12-year-old laptop Valve nailed the original form factor—but is it time for a higher-resolution display? Modern handhelds are pushing higher TDPs for extra performance—should Valve move beyond the 15-watt limit? A more modular design, including easily user-replaceable SSDs, for better long-term upgrades Is a 5× performance jump over the original Steam Deck the bare minimum expectation? Bigger batteries for longer, truly untethered gaming sessions Hall-effect joysticks to finally put stick drift in the bin A next-gen dock with Switch-style performance boosts when docked If that's still not enough power… is it time for external GPU support? Whether you're all-in on Valve's handheld future or just curious where PC portables are heading next, this one's for you. Check out Trev and Al’s other podcast where they reminisce about a different episode of Star Trek every month! : https://longrangesensors.com/episodes Intro/Outro Music – Turn The Page (Tutorial Theme) – Ridge Racer Type 4 (1998) – Sony PlayStation – Composer(s): Hiroshi Okubo
What actually happens when AI stops being a cloud-only experiment and starts running on desks, in labs, and inside real teams trying to ship real work? In this episode, I sit down with Logan Lawler, Senior Director at Dell Technologies, to unpack how AI workloads are really being built and supported on the ground today. Logan leads Dell's Precision and Pro Max AI Solutions business and hosts Dell's own Reshaping Workflows podcast, giving him a rare vantage point into how engineers, developers, creatives, and data teams are actually working, not how marketing slides suggest they should be. We start by cutting through the noise around AI PCs that dominates every conference stage. Logan breaks down what genuinely matters when choosing hardware for AI work. CPUs, GPUs, NPUs, memory, and software stacks all play different roles, and misunderstanding those roles often leads teams to overspend or underspec. Logan explains why all AI workstations qualify as AI PCs, but not all AI PCs are suitable for serious AI work, and why GPUs remain central for anyone doing real model development, fine-tuning, or inference at scale. From there, the conversation shifts to a broader architectural rethink. As AI workloads grow heavier and data sensitivity increases, many organizations are reconsidering where compute should live. Logan shares how GPU-powered Dell workstations, storage-rich environments, and hybrid cloud setups are giving teams more control over performance, cost, and data. We explore why local compute is becoming attractive again, how modern GPUs now rival small server setups, and why hybrid workflows, local for development and cloud for deployment, are becoming the default rather than the exception. One of the most compelling parts of the discussion comes when Logan connects hardware choices back to business reality.
Drawing on real-world examples, he explains how teams use local AI environments to move faster, reduce cloud costs, and avoid getting locked into architectures that are hard to unwind later. This is not about abandoning the cloud, but about being intentional from the start, mainly as AI usage spreads beyond developers into marketing, operations, and everyday business roles. We also step back to reflect on a deeper challenge. As AI becomes easier to use, what happens to critical thinking, curiosity, and learning? Logan shares a candid perspective, shaped by his experiences as a parent, technologist, and podcast host, raising questions about how tools should support rather than replace thinking. If you are trying to make sense of AI PCs, local versus cloud compute, or how teams are really reshaping workflows with AI hardware today, this conversation offers grounded insight from someone living at the center of it. Are we designing systems that genuinely empower people to think better and build faster, or are we sleepwalking into decisions we will regret later? How do you want your own AI workflow to evolve? Useful Links TLDR AI newsletter and the Neurons. The Reshaping Workflows podcast Connect with Logan Lawler Follow Dell Technologies on LinkedIn
Infrastructure was passé…uncool. Difficult to get dollars from Private Equity and Growth funds, and almost impossible to get a VC fund interested. Now?! Now, it's cool. Infrastructure seems to be having a Renaissance, a full-on Rebirth, fueled not just by commercial interests (e.g. the advent of AI), but also by industrial policy and geopolitical considerations. In this episode of Tech Deciphered, we explore what's cool in the infrastructure spaces, including mega trends in semiconductors, energy, networking & connectivity, and manufacturing.

Navigation:
- Intro
- We're back to building things
- Why now: the 5 forces behind the renaissance
- Semiconductors: compute is the new oil
- Networking & connectivity: digital highways get rebuilt
- Energy: rebuilding the power stack (not just renewables)
- Manufacturing: the return of "atoms + bits"
- Wrap: what it means for startups, incumbents, and investors
- Conclusion

Our co-hosts: Bertrand Schmitt, Entrepreneur in Residence at Red River West, co-founder of App Annie / Data.ai, business angel, advisor to startups and VC funds, @bschmitt; Nuno Goncalves Pedro, Investor, Managing Partner, Founder at Chamaeleon, @ngpedro

Our show: Tech DECIPHERED brings you the Entrepreneur and Investor views on Big Tech, VC and Start-up news, opinion pieces and research. We decipher their meaning, and add inside knowledge and context. Being nerds, we also discuss the latest gadgets and pop culture news.

Subscribe To Our Podcast

Introduction

Nuno Gonçalves Pedro
Welcome to episode 73 of Tech Deciphered, "Infrastructure, the Rebirth or Renaissance". Infrastructure was passé, it wasn't cool, but all of a sudden now everyone's talking about network, talking about compute and semiconductors, talking about logistics, talking about energy. What gives? What's happened?
It was impossible in the past to get any funds, venture capital, even, to be honest, some private equity funds or growth funds, interested in some of these areas, but now all of a sudden everyone thinks it's cool. Infrastructure seems to be having a renaissance, a full-on rebirth. In this episode, we will explore the cool ways in which the infrastructure spaces are moving and what's leading to it. We will deep dive into the forces that are leading us here. We will deep dive into semiconductors, networking and connectivity, energy, manufacturing, and then we'll wrap up. Bertrand, so infrastructure is cool now.

We're back to building things

Bertrand Schmitt
Yes. I thought software was going to eat the world. I cannot believe it was ten, maybe even 15, years ago that we got that quote from Andreessen about software eating the world. I guess it's an eternal balance. Sometimes you get ahead of yourself, you build a lot of software stack, and at some point, you need the hardware to run this software stack, and there is only so much the bits can do in a world of atoms.

Nuno Gonçalves Pedro
Obviously, we've gone through some of this before. I think what we're going through right now is AI eating the world, and because AI is eating the world, it's driving a lot of this infrastructure building that we need. We don't have enough energy to be consumed by all these big data centers and hyperscalers. We need to be innovative around the network as well, because of the consumption in terms of network bandwidth that is linked to that compute consumption. In some ways, it's not software eating the world, it's AI eating the world. Because AI is eating the world, we need to rethink everything around infrastructure, and infrastructure is becoming cool again.

Bertrand Schmitt
There is something deeper in this. It's that the past 10, even 15 years were all about SaaS before AI. SaaS, interestingly enough, was very energy-efficient. When I say SaaS, I mean cloud computing at large.
What I mean by energy-efficient is that cloud computing actually helped make energy use more efficient, because instead of companies having their own separate data centers in many locations, sometimes poorly run from an industrial perspective, they replaced their own privately run data centers with data centers run by the hyperscalers of the world. These data centers were run much better in terms of how you manage cooling, energy efficiency, rack density, all of this stuff. Actually, the cloud revolution didn't increase the use of electricity. The cloud revolution was a replacement of your private data center by the hyperscaler data center, which was energy efficient. That's why, even if we were always talking about the growth of cloud computing, we never felt the pinch in terms of electricity. As you say, it all changed with AI, because it was not a simple "replacement" of locally run infrastructure by hyperscaler-run infrastructure. It was truly adding, on top of an existing infrastructure, a new computing infrastructure, in a way out of nowhere. And not just any computing infrastructure: an infrastructure that was really, really voracious in terms of energy use.

Nuno Gonçalves Pedro
There was one other effect. Obviously, we've discussed before that we are in a bubble; we won't go too much into that today. But in the previous big bubble in tech, which was in the late '90s, there was a lot of infrastructure built. We thought the internet was going to take over back then. It didn't take over immediately, but there was a lot of network connectivity and bandwidth built back in the day. Companies imploded because of that as well, or had to restructure and go into Chapter 11. A lot of the big telco companies had their own issues back then, etc., but a lot of infrastructure was built for this advent of the internet, which would then take a long time to come.
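Bertrand's point that cloud was a replacement rather than an addition can be made concrete with a back-of-envelope sketch. All numbers below are illustrative assumptions, not figures from the episode: moving the same IT load from a poorly run private data center to a hyperscaler with a much better PUE can lower total grid draw even while the IT load itself grows.

```python
# Back-of-envelope sketch (illustrative numbers only): consolidating from a
# poorly run private data center to a hyperscaler with a better Power Usage
# Effectiveness (PUE) can cut total electricity even as IT load grows.

def total_power_mw(it_load_mw: float, pue: float) -> float:
    """Total facility draw = IT load times PUE (overhead multiplier)."""
    return it_load_mw * pue

# Hypothetical: 10 MW of IT load in a legacy enterprise DC (PUE ~ 2.0)
legacy = total_power_mw(10, 2.0)        # 20 MW at the meter

# Same workloads on a hyperscaler (PUE ~ 1.15), even assuming the
# migration grows the IT load by 50%
hyperscaler = total_power_mw(15, 1.15)  # 17.25 MW at the meter

print(legacy, hyperscaler)  # net grid demand still falls
```

The sketch only captures the "replacement" era; AI capacity, as Bertrand notes, is additive, so the same arithmetic no longer nets out.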
In some ways, to your point, there was a lot of latent supply built back then that for a while wasn't used, but then it was. Now it's been used up, and now we need new stuff. That's why I feel we're now having a new moment of infrastructure, a new moment of moving forward, aligned a little bit with what you just said around cloud computing and the advent of SaaS, but also with the fact that we had a lot of build-up back in the late '90s and early 2000s, which we're still reaping the benefits of in today's world.

Bertrand Schmitt
Yeah, that's actually a great point, because in the late '90s there was a lot of fibre that was built, laying out the fibre across countries and inside countries. With this fibre, interestingly enough, you could just change the computing on both sides of the fibre, the routing, the modems, and upgrade the capacity of the fibre; the fibre itself was the same in between. The big CapEx investment was really laying down that fibre, but then you could upgrade easily. Even if both ends of the fibre were using very old infrastructure from the '90s, or were actually dark and not being put to use, step by step it was put to use, equipment was replaced, and step by step you could keep using more and more of this fibre. It was a very interesting development, as you say, because it could be expanded over the years. Whereas if we talk about GPUs used for AI, the interesting part is that it's totally the opposite: after a few years, a GPU is useless. Some, like Google, will argue that they can depreciate some GPUs over 5, 6 years. But at the end of the day, the difference in perf and energy efficiency of the GPUs means that if you are energy constrained, you just want to replace the old ones, even ones as young as three years old. You just have to look at Nvidia's increasing specs, generation after generation. It's pretty insane. It's usually at least 3X year over year in terms of performance.
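The energy-constrained replacement argument Bertrand makes can be sketched as arithmetic. The power budget and the per-generation gain below are assumptions for illustration, not Nvidia figures:

```python
# Sketch of the energy-constrained GPU replacement argument. With a fixed
# site power budget, fleet throughput scales with perf-per-watt, so newer
# GPUs multiply useful compute without drawing a single extra megawatt.
# All numbers are illustrative assumptions.

POWER_BUDGET_KW = 1_000            # fixed power budget of the site (assumed)

def fleet_throughput(perf_per_kw: float) -> float:
    """Total useful compute the site can deliver under its power cap."""
    return perf_per_kw * POWER_BUDGET_KW

old_gen = 1.0                      # normalized perf/kW of a 3-year-old GPU
gain_per_gen = 2.0                 # assumed perf/W gain per generation
new_gen = old_gen * gain_per_gen ** 3   # three generations later

# Same electricity bill, 8x the compute: replacing "young" GPUs pays off.
print(fleet_throughput(new_gen) / fleet_throughput(old_gen))  # 8.0
```

This is why GPU economics invert the fibre story: the sunk asset, not the endpoints, is what depreciates fastest.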
Nuno Gonçalves Pedro
At this moment in time, it's very clear that it's happening.

Why now: the 5 forces behind the renaissance

Maybe let's deep dive into why it's happening now. What are the key forces around this? We've identified, I think, five forces that are particularly vital and that lead to the world we're in right now. One we've already talked about, which is AI: the demand shock and everything that's happened because of AI. Data centers drive power demand, drive grid upgrades, drive innovative ways of getting energy, drive chips, drive networking, drive cooling, drive manufacturing, drive all the things that we're going to talk about in just a bit. A second element that we could probably highlight in terms of the forces behind this is where we are in terms of cost curves around technology. Obviously, a lot of things are becoming much cheaper. The simulation of physical behaviours has become a lot cheaper, which becomes almost a virtuous cycle in and of itself, driving the adoption of more and more AI. Simulation is becoming more and more accessible, so you can do a lot of simulation, with digital twins and other things, of the real world before you go into the real world. Robotics itself is becoming cheaper. A lot of the hardware is becoming cheaper. Compute has become cheaper as well. There are a lot of cost curves that have aligned, and that's maybe the second force that I would highlight. Obviously, funds are catching up. We'll leave that a little bit to the end; we'll do a wrap-up and talk a little bit about the implications for investors. But there's a lot of capital out there, some capital related to industrial policy, other capital related to private initiative, private equity, growth funds, even venture capital, to be honest, and a few other elements of that. That would be a third force that I would highlight.

Bertrand Schmitt
Yes.
Interestingly enough, in terms of capital use, and we'll talk more about this, for some firms, if we are talking about energy investment, it was very difficult to invest if you were not investing in green energy. Now I think more and more firms and banks are willing to invest in or support different types of energy infrastructure, not just "green energy." That's an interesting development, because at some point it had become near impossible to invest more in gas development or oil development in the US or in most Western countries. At least in the US, this is dramatically changing the framework.

Nuno Gonçalves Pedro
Maybe to add the last two forces that I think we see behind the renaissance of what's happening in infrastructure; they go hand in hand. One is the geopolitics of the world right now. The world was global and flat, and now it's becoming increasingly siloed, so people are playing to their own interests. There's a lot of replication of infrastructure as well, because people want to be autonomous, and they want to drive their own ability to serve end consumers, businesses, etc., in terms of data centers and everything else. That has led to things like, for example, chip shortages. In semiconductors there are shortages across the board, like memory shortages, where everything is booked up until 2027 or 2028. A lot of the memory that is being produced is already spoken for, which is shocking. There's obviously a generation of supply-chain fragilities, some of it because of policies, for example in the US with tariffs, etc., security of energy, etc.
Then the last force, directly linked to the geopolitics, is the opposite of it, which is policy as an accelerant, so to speak, as something that is accelerating development. Because of those silos, individual countries, as part of their industrial policy, want to put capital behind their local ecosystems and local companies, so that their local companies and local systems are for sure the winners, or at the very least serve their own local markets. I think that's true of a lot of the things we're seeing, for example, in the US with the CHIPS Act for semiconductors, with the IIJA, the IRA, and other elements of what we've seen in terms of policies that have been implemented even in Europe, China, and other parts of the world.

Bertrand Schmitt
Talking about chip shortages, it's pretty insane what has been happening with memory. Just in the past few weeks, I have seen close to a 3X increase in memory prices in a matter of weeks. Apparently, it started with a huge order from OpenAI; apparently, they have tried to corner the memory market. Interestingly enough, it has caught the entire industry flat-footed, and that includes Google, that includes Microsoft. There are rumours of their teams now having moved to South Korea, so they are closer to the action in terms of memory factories and memory decision-making. There are rumours of execs who got fired because they didn't prepare for this type of eventuality, or didn't lock in some of the supply chain, because that memory was initially for AI, but obviously it impacts everything. With factories making memory, you have to plan years in advance. You cannot open new lines of manufacturing just like that. For all the factories that are going to open, we know when they are going to open, because they've been built up for years. There is no extra capacity suddenly. At the very best, you can change your line of production a bit from one type of memory to another.
But that’s probably about it. Nuno Gonçalves Pedro Just to be clear, all these transformations we’re seeing isn’t to say just hardware is back, right? It’s not just hardware. There’s physicality. The buildings are coming back, right? It’s full stack. Software is here. That’s why everything is happening. Policy is here. Finance is here. It’s a little bit like the name of the movie, right? Everything everywhere all at once. Everything’s happening. It was in some ways driven by the upper stacks, by the app layers, by the platform layers. But now we need new infrastructure. We need more infrastructure. We need it very, very quickly. We need it today. We’re already lacking in it. Semiconductors: compute is the new oil Maybe that’s a good segue into the first piece of the whole infrastructure thing that’s driving now the most valuable company in the world, NVIDIA, which is semiconductors. Semiconductors are driving compute. Semis are the foundation of infrastructure as a compute. Everyone needs it for every thing, for every activity, not just for compute, but even for sensors, for actuators, everything else. That’s the beginning of it all. Semiconductor is one of the key pieces around the infrastructure stack that’s being built at scale at this moment in time. Bertrand Schmitt Yes. What’s interesting is that if we look at the market gap of Semis versus software as a service, cloud companies, there has been a widening gap the past year. I forgot the exact numbers, but we were talking about plus 20, 25% for Semis in term of market gap and minus 5, minus 10 for SaaS companies. That’s another trend that’s happening. Why is this happening? One, because semiconductors are core to the AI build-up, you cannot go around without them. But two, it’s also raising a lot of questions about the durability of the SaaS, a software-as-a-service business model. 
Because if suddenly we have better AI, and that's all everyone is talking about to justify the investment in AI, that it keeps getting better, that it keeps improving and is going to replace your engineers, your software engineers, then maybe all of this moat that software companies built up over years, sometimes decades, might unravel under the pressure of newly coded, newly built, cheaper alternatives built from the ground up with AI support. It's not just that semiconductors are doing great. It's also as a result of that underlying AI trend that software is doing worse right now.

Nuno Gonçalves Pedro
At the end of the day, this foundational piece of infrastructure, semiconductors, obviously gets manifested in many things: fabrication, manufacturing, packaging, materials, equipment. Everything's being driven, ASML, etc. There are all these different players around the world that are having skyrocketing valuations now; it's because they're all part of the value chain. Just to be very, very clear, there are two elements of this that I think are very important for us to remember at this point in time. One, entire value chains are being shifted. It's not just the chips that lead to computing in the strict sense of it. It's also chips that drive, for example, network switching. We're going to talk about networking in a bit, but you need chips to drive better network switching. That's getting revolutionised as well. For example, we have an investment in that space, a company called eridu.ai, and they're revolutionising one of the pieces of that stack. The second piece of the puzzle, besides the holistic view of a world that's changing in terms of value chains, is, as we discussed before, industrial policy.
We already mentioned the CHIPS Act, which is something that has been done in the US, which I think is 52 billion in incentives across a variety of things, grants, loans, and other mechanisms, to incentivise players to scale capacity quickly and to scale capacity locally in the US. One of the effects of that now is obviously the TSMC US expansion, with a factory here in the US. We have other levels of expansion going on with Intel, Samsung, and others that are happening as we speak. Again, it's this two-by-two. On one side, market forces that drive the need for fundamental shifts in the value chain. On the other, industrial policy and actual money put forward by states, by governments, by entities that want to revolutionise their own local markets.

Bertrand Schmitt
Yes. When you talk about networking, it makes me think about what NVIDIA did more than six years ago when they acquired Mellanox. At the time, in 2019, it was the largest acquisition for NVIDIA, and it was networking for the data center. Not networking across data centers, but inside the data center: basically making sure that your GPUs, the different computers, can talk as fast as possible to each other. I think that's one piece of the puzzle that a lot of companies are missing about NVIDIA: they are truly providing full systems. They are not just providing a GPU. Some of their competitors are just providing GPUs, but NVIDIA can provide you the full rack. Now they've moved to liquid-cooled computing as well; they design their systems with liquid cooling in mind. They have a very different approach in the industry. It's a system-level approach to how you optimize your data center. Quite frankly, that's a bit hard to beat.

Nuno Gonçalves Pedro
For those listening, you'd be like, this is all very different. Semiconductors, networking, energy, manufacturing, this is all different.
Then all of a sudden, as Bertrand is saying, there are some players that are acting across the stack. Then you see nuclear power and Microsoft, or nuclear power and Google, in the same sentence, and you're like, what happened? Why are these guys in the same sentence? They're tech companies; why are they talking about energy? It's the nature of this: these ecosystems need to go hand in hand. The value chains are very deep. For you to actually reap the benefits of, for example, more and more semiconductor availability, you have to have better and better networking connectivity, and you have to have more and more energy at lower and lower costs. All these things are intrinsically linked. That's why you see all these big tech companies working across the stack, NVIDIA being a great example of trying to create a true systems approach, as Bertrand was mentioning.

Networking & connectivity: digital highways get rebuilt

On the networking and connectivity side, as we said, we had a lot of fibre that was put down, etc., but there's still more build-out that needs to be done. 5G densification is still happening. We're now starting to talk about 6G; I'm not sure most telcos are very happy about that, because they have just been doing all this CapEx and deployment for 5G, and people have already started talking about 6G and what's next. Obviously, data center interconnect is quite important, and all the hubbing that needs to happen around data centers is very, very important. We are seeing a lot of movement around connectivity that is particularly important: network gear and the emergence of players like Broadcom on the semiconductor side of the fence, and obviously Cisco, Juniper, Arista, and others that are very much present in this space.
As I said, we made an investment on the semiconductor side of networking as well, realizing that there are still a lot of bottlenecks there. But obviously, the networking and connectivity stack still needs to be built at all levels: within the data centers, outside of the data centers in terms of last mile, and across the board in terms of fibre. We're seeing a lot of movement in the space. It's what connects everything. At the end of the day, if there's too much latency in these systems, if the bandwidths are not high enough, then we're going to have huge bottlenecks put on the table by the networking providers. Obviously, that doesn't help anyone. If there's a bottleneck anywhere, it doesn't work. All of this doesn't work.

Bertrand Schmitt
Yes. Interestingly enough, I know we said for this episode we'd not talk too much about space, but when you talk about 6G, it makes me think, of course, about Starlink. That's really your last-mile delivery being built as well. It's a massive investment. We're talking about thousands of satellites that are interconnected with each other through laser links. This is changing dramatically how companies can operate, how individuals can operate. For companies, you can have great connectivity from anywhere in the world. For the military, it's the same. For individuals, suddenly, you won't have dead spots, white zones. This is also part of changing how we could do things. It's quite important even in the development of AI, because, yes, you can have AI at the edge, but the interconnect to the rest of the system is quite critical. Having the availability of a high-quality network link from anywhere is a great combo.
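To see why bandwidth becomes the bottleneck Nuno describes, here is a toy calculation. The checkpoint size and link speeds are assumptions chosen purely for illustration:

```python
# Toy illustration of the bandwidth-bottleneck point: time to move a large
# model checkpoint over links of different speeds. Sizes and link rates
# are assumptions for illustration only.

def transfer_seconds(size_gb: float, link_gbps: float) -> float:
    """Gigabytes -> gigabits (x8), divided by the link rate in Gbit/s."""
    return size_gb * 8 / link_gbps

checkpoint_gb = 800   # hypothetical large model checkpoint

for name, gbps in [("10 GbE", 10), ("100 GbE", 100), ("800G optics", 800)]:
    print(f"{name}: {transfer_seconds(checkpoint_gb, gbps):,.0f} s")
```

At the slow end the link, not the GPU, sets the pace, which is why switching silicon and data center interconnect are getting rebuilt alongside compute.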
Nuno Gonçalves Pedro
Then you start seeing regions of the world that want to differentiate to attract digital nomads by saying, "We have submarine cables that come and hub through us, and therefore our connectivity is amazing." I was just in Madeira, one of the islands of Portugal, and they were talking about that: we have submarine cables, you have great connectivity. We're getting into that discussion where people used to be like, I don't care, I assume I have decent connectivity. Now people actually care about decent connectivity. This discussion is not just happening at the corporate level, at the enterprise level, etc. Even consumers, even people who want to work remotely or be based somewhere else in the world, are asking: this is important, where is there great connectivity for me, so that I can have access to the services I need? Everyone becomes aware of everything. We had a Cloudflare mishap more recently where the CEO had to jump online and explain, deeply and technically, what happened, because we're in their hands. If Cloudflare goes down, there are a lot of websites that don't work. All of this, I think, is now front of mind rather than just an afterthought to maybe think about in the future.

Bertrand Schmitt
Totally. I think life is being changed by network connectivity, the life of individuals, of companies, everything. Look at airlines and ships and cruise ships: now, with the advent of satellite connectivity, it's dramatically changing our experience.

Nuno Gonçalves Pedro
Indeed.

Energy: rebuilding the power stack (not just renewables)

Moving maybe to energy. We've talked about energy quite a bit in the past. Maybe we start with the one that we didn't talk about as much, although we did mention it: let's call it the fossil infrastructure and what's happening around there. Everyone was saying it's all going to be renewables and green. We've had a shift of power, geopolitics.
Honestly, I think the writing was on the wall that we needed a lot more energy creation. It wasn't either/or; we needed all sources to be as efficient as possible. Obviously, we now see a lot of work happening in areas many would have thought no longer mattered. We're seeing LNG terminals, pipelines, and petrochemical capacity being pushed up, a lot of stuff happening around export markets, and not only around export but also around overall distribution, with increases and improvements so that there's less leakage in the distribution of energy, etc. In some ways, people say it's controversial, but we don't have enough energy to spare. We're already behind, so we need as much as we can get. We need to figure out how to extract as much as we can even from natural resources, which in many people's minds is almost blasphemous to talk about, but it is where we are. So there's a lot of renaissance also happening on the fossil infrastructure side, so to speak.

Bertrand Schmitt
Personally, I'm ecstatic that there is a renaissance going on regarding what is called fossil infrastructure. Oil and gas are critical to humanity's well-being. You have never had growth of countries without energy growth, and nothing else can come close. Nuclear could come close, but it takes decades to deploy. I think it's great. It's great for developed economies, so that they do better and can expand faster. It's great for third-world countries, who have no other realistic choice. I really don't know what happened over the past 10, 15 years and why this suddenly became blasphemous. But I'm glad that, strangely, thanks to AI, we are back to a more rational mindset about energy and making sure we get efficient energy where we can. Obviously, nuclear is getting a second act.

Nuno Gonçalves Pedro
I knew you would be. We've been talking about it for a long time, and you've been talking about it in particular for a very long time.
Bertrand Schmitt
Yes, definitely. It's been an area of interest of mine for 25 years. I've been shocked by what happened in Europe, that wilful destruction of energy infrastructure, especially in Germany. Just a few months ago, they were still demolishing, on live TV, nuclear stations in perfect working condition and replacing them with coal. I'm not sure there is a better definition of insanity at this stage. It looks like it's only the Germans going that hardcore for some reason, but at least the French have stopped their program of decommissioning. America seems to be doing the same, so that's great. On top of it, there are new generations of reactors that could be put to use. The Chinese are building up a very large nuclear reactor program, more than 100 reactors in construction over the next 10 years. I think everybody has to catch up, because at some point this is the most efficient energy solution, especially if you don't build crazy constraints around the construction of these nuclear reactors. If we are rational about permits, about energy, about safety, there are great things we could be doing with nuclear. That might be one of the only solutions if we want to be competitive, because when energy prices go down like crazy in China, as they will once the Chinese have reached delivery of their significant build-up of nuclear reactors, we had better be ready with similar options from a cost perspective.

Nuno Gonçalves Pedro
From the outside, at the very least, nuclear seems to be one of the areas within energy where the most innovation is happening at this moment in time. You have startups in the space, you have a lot of real money going into it, not just your classic industrial development. That's very exciting. Moving maybe to decarbonization and what's happening there: CCUS, and for those who don't know what it is, carbon capture, utilization, and storage. There's a lot of stuff happening in that space.
That’s the area that deals with the ability to capture CO₂ emissions from industrial sources and/or the atmosphere and preventing their release. There’s a lot of things happening in that space. There’s also a lot of things happening around hydrogen and geothermal and really creating the ability to storage or to store, rather, energy that then can be put back into the grids at the right time. There’s a lot of interesting pieces happening around this. There’s some startup movement in the space. It’s been a long time coming, the reuse of a lot of these industrial sources. Not sure it’s as much on the news as nuclear, and oil and gas, but certainly there’s a lot of exciting things happening there. Bertrand Schmitt I’m a bit more dubious here, but I think geothermal makes sense if it’s available at reasonable price. I don’t think hydrogen technology has proven its value. Concerning carbon capture, I’m not sure how much it’s really going to provide in terms of energy needs, but why not? Nuno Gonçalves Pedro Fuels niche, again, from the outside, we’re not energy experts, but certainly, there are movements in the space. We’ll see what’s happening. One area where there’s definitely a lot of movement is this notion of grid and storage. On the one hand, that transmission needs to be built out. It needs to be better. We’ve had issues of blackouts in the US. We’ve had issues of blackouts all around the world, almost. Portugal as well, for a significant part of the time. The ability to work around transmission lines, transformers, substations, the modernization of some of this infrastructure, and the move forward of it is pretty critical. But at the other end, there’s the edge. Then, on the edge, you have the ability to store. We should have, better mechanisms to store energy that are less leaky in terms of energy storage. Obviously, there’s a lot of movement around that. Some of it driven just by commercial stuff, like Tesla a lot with their storage stuff, etc. 
Some of it is really driven at scale by energy players that have an interest in, for example, some of the storage happening closer to the consumption as well. But there are a lot of exciting things happening in that space, and it is a transformative space. In some ways, the bottleneck of energy is also around transmission, and then ultimately around the access to energy by homes, by businesses, by industries, etc.

Bertrand Schmitt
I would say some of the blackouts are truly man-made. Let me pick on California, for instance. That's the logical conclusion of the regulatory system in place in California. On one side, you limit the price that the energy supplier, the utility company, can sell at. On the other side, you force them to decommission the most energy-efficient and least expensive energy sources. That means you cap the revenues and you make the costs increase. What is the result? The result is that you cannot invest anymore to support the grid and to support transmission. That's 100% obvious. That's what happened, at least in many places. The solution is to stop crazy regulations that make no economic sense whatsoever. Then, strangely enough, you can invest again in transmission, in maintenance, and all of this stuff. Maybe another piece, if we pick on California: if you authorize building construction in areas where fires are likely, that's also very costly to support from a utility perspective, because then you are creating more risk. You are forced by the state to connect these new constructions to the grid. You have more maintenance. If it fails, you can create a fire. If you create a fire, you have to pay billions in fees. I just want to highlight that some of this is not a technological issue, is not per se an investment issue, but is simply the result of very bad regulations. I hope that some will learn, and some changes will be made, so that utilities can do their job better.
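Bertrand's squeeze argument reduces to a toy model. Every number below is invented for illustration: cap the price a utility may charge while its costs rise, and the margin left over for grid investment flips negative.

```python
# Toy model of the regulatory squeeze described above (all numbers are
# invented): a revenue cap plus rising costs leaves nothing to invest
# in transmission and maintenance.

PRICE_CAP_USD_PER_KWH = 0.20   # maximum the regulator lets the utility charge
SOLD_TWH = 50                  # energy sold per year (1 TWh = 1e9 kWh)

def investable_billions(cost_per_kwh: float) -> float:
    """Margin left for grid upkeep, in $ billions per year."""
    return (PRICE_CAP_USD_PER_KWH - cost_per_kwh) * SOLD_TWH

print(round(investable_billions(0.15), 2))  # costs below the cap: money to invest
print(round(investable_billions(0.21), 2))  # costs above the cap: negative margin
```

The point is not the particular numbers but the sign change: once costs cross the cap, grid investment is arithmetically impossible, regardless of technology.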
Nuno Gonçalves Pedro Then last, but not least, on the energy side, energy is becoming more and more digitally defined in some ways. It’s like the analogy to networks, which have become more and more software defined. At the edge you have things like smart meters, and there’s a lot you can do around the key elements of the business model, like dynamic pricing. Demand response is one of the areas I invested in; I invested in a company called Omconnect that has now merged with what used to be Google Nest, to deploy the ability to do demand response and also pass it to consumers, so that consumers can reduce their consumption at the times when it is least price-effective, or least green, or least good for the energy companies to produce energy. There are other interesting things happening. Obviously, we have a lot more electric vehicles. These are also elements of storage. They don’t look like elements of storage, but the car has electricity in it once you charge it. Once it’s charged, what do you do with it? Could you do something else, like the whole reverse-charging piece that we also see today in mobile devices and other edge devices, so to speak? That also changes the architecture of what we’re seeing in the space. With AI, a lot of elements change around the value chain: the ability to do forecasting, the ability to have, for example, virtual power plants built out of storage that is already deployed, etc. Interesting times. I’m not sure all utilities and energy providers around the world are innovating at the same pace and in the same way. But just looking at the industry and talking to a lot of the CEOs who are leading innovation at some of these companies, there’s definitely a lot more happening in the last few years than maybe over the last few decades. Very exciting times. 
Bertrand Schmitt I think there are two interesting points in what you say. Talking about EVs, for instance, a Cybertruck is able to send electricity back to your home if your home is able to receive electricity from that source. Usually, you have some changes to make to the meter system and to your panel. That’s one great way to potentially use your car battery. Another piece of the puzzle is that, strangely enough, there has been a big push to EVs, but at the same time, there has not been a push to provide more electricity. If you replace cars that use gasoline with electric vehicles that use electricity, you need to deliver more electricity. It doesn’t require a PhD to get that. But, strangely enough, nothing was done. Nuno Gonçalves Pedro Apparently, it does. Bertrand Schmitt I remember a study in France saying that, if people were all to switch to EVs, we would need 10 more nuclear reactors just on the way from Paris to Nice and the Côte d’Azur, the French Riviera, in order to provide electricity to the cars going there during the summer vacation. But, guess what? No nuclear plant is being built along the way. Good luck charging your vehicles. I think that’s another limit that has been imposed on the grid: more electric vehicles that require charging while the related infrastructure has not been upgraded to support them. Actually, it’s quite the opposite. In many cases, we had situations of nuclear reactors and other facilities closing down. Obviously, the end result is an increase in the price of electricity, at least in the states and countries that have not fully sorted that out. Nuno Gonçalves Pedro Manufacturing: the return of “atoms + bits” Moving to manufacturing and what’s happening around manufacturing technology. There’s maybe a case to be made that manufacturing is getting replatformed, right? It’s getting redefined. 
Some of it is very obvious and has already been ongoing for a couple of decades: the advent of more and more either robot-augmented factories or fully roboticized factories, where there’s very little presence of human beings. There’s the element of software definition on top of it, like simulation. A lot of automation is going on. A lot of AI has been applied to some lines in terms of vision and safety. We have an investment in a company called Sauter Analytics that is very focused on that from the perspective of employees, when there are still humans in the loop, so to speak, and on the ability to really figure out when people are at risk, among other things. But there’s more than that. There’s a little bit of a renaissance in and of itself. If we go back a couple of decades, factories and manufacturing were very much defined by the initial setup. It was difficult to innovate, difficult to shift the line, difficult to change how things are done on the line. The advent of new factories that have less legacy and more flexible systems, not only in terms of software but also in terms of hardware and robotics, allows us, for example, to change and shift lines much more easily to different functions, which will hopefully, over time, not only dramatically reduce the cost of production but also dramatically increase the yield and the production itself. A lot of cool stuff happening in that space. Bertrand Schmitt It’s exciting to see that. One thing this current administration in the US has been betting on is not just hoping for a construction renaissance, especially on the factory side. Their mindset was three things. One, force more companies to build locally. 
Two, increase the output and supply of energy so that running factories here in the US would be cheaper than anywhere else. Maybe not cheaper than China, but certainly cheaper than Europe. And three, the belief that, thanks to AI, we will be able to have more efficient factories. There is always the question of whether Americans still want to keep making clothes, for instance, in factories. That used to be the case maybe 50 years ago, but then came the move to China, to Bangladesh, to different places. That’s not the goal. But it can make sense that, thanks to robots and AI, there is the ability to have more automated factories, and these factories could be run more efficiently and, as a result, be price-competitive even if run in the US. When you think about it, that has been the South Korean playbook, for instance: more automated factories, robotics, all of this, because that was the only way to compete against China, which has, or used to have, a near-infinite supply of cheaper labour. I think all of this combined can make a lot of sense. In a way, it’s probably creating a perfect storm. Maybe another piece of the puzzle this administration has been working on pretty hard is simplifying the permitting process. A big chunk of the problem is that if your permitting is very complex and very expensive, what takes two years to build becomes four years, five years, 10 years. The investment math is not the same in that situation. I think that’s a very important part of the puzzle: use this opportunity to reduce the regulatory state and make sure that things are more efficient. Things are also less at risk of bribery and fraud, because with all these regulations, there might be ways around them. I think it’s quite critical to be careful about this. Maybe the last piece of the puzzle is the way accounting works. 
There are new rules in 2026 in the US where you can fully depreciate your CapEx much faster than before. That’s a big win for manufacturing in the US: suddenly, you can depreciate some of your CapEx investment in manufacturing much faster. Nuno Gonçalves Pedro Just going back to a point you made and then moving it forward: even China is now probably the country in the world with the highest rate of innovation and uptake of industrial robots, because of demographic issues, a little bit like what led Japan in the first place to be one of the real big innovators around robots in general. With an aging population and fewer and fewer children, how are you going to replace all these people? Moving on to big winners: who becomes a big winner in a space where manufacturing is fundamentally changing? Obviously, there’s the big four of robots, which are ABB, FANUC, KUKA, and Yaskawa. Epson, I think, is now in there as well, although it’s not considered one of the big four. Kawasaki, Denso, Universal Robots. There are really big industrial robotics companies in the space from different origins: FANUC, Yaskawa, and Epson from Japan, KUKA from Germany, ABB from Switzerland and Sweden. There are also a lot of now-emerging companies from China, and what’s happening in that space is quite interesting. On the other hand, other winners will include the integrators that build the rest of the infrastructure that goes into manufacturing: the Siemens of the world, the Schneiders, the Rockwells, which will lead fundamental industrial automation. Some big winners in there whose names are well known, so probably not a huge amount of surprises. There are movements; as I said, we’re still going to see the big Chinese players emerging in the world. And there are startups innovating around a lot of the edges that are significant in this space. 
We’ll see if this is a space that will just continue to be dominated by the big foreign robotics companies, a couple of others, and the big integrators, or not. Bertrand Schmitt I think you are right to bring up China, because China has been moving very fast in robotics. Some Chinese companies are world-class in their use of robotics. You have this strange mix of some older industries, typically state-owned, where robotics might not be put to use so much, versus some private companies, typically tech companies that in some situations are reconverting into hardware, that went all in on robotics; their demonstrations are an example of what’s happening in China. Definitely, the Chinese are not resting. Everyone smart enough is playing that game, from the Americans to the Chinese, the Japanese, and the South Koreans. Nuno Gonçalves Pedro Exciting things in manufacturing. Maybe to bring it all together: what does it mean for all the big players out there? If we talk about startups, we didn’t mention a ton of startups today, right? Maybe incumbents win across the board. But on a more serious note, we did mention a few. For example, in nuclear energy, there are a lot of startups, some of them incredibly well-funded at this moment in time. Wrap: what it means for startups, incumbents, and investors There might be some big disruptions that come out of startups, for example, in that space. On the chipset side, we talked about the big gorillas, the NVIDIAs, AMDs, Intels, etc., of the world. But we didn’t quite talk about the fact that there’s a lot of innovation, again, happening on the edges, with new players going after very large niches, be it in networking and switching, be it in compute and other areas that will need different, more specialized solutions, potentially in terms of compute or in terms of semiconductor deployments. 
I think there are still some opportunities there, maybe not winner-takes-all, but certainly around a lot of very significant niches that might grow very fast. Manufacturing, we mentioned the same: some of the incumbents seem to be in the driving seat. We’ll see whether some startups come in and take some of the momentum there; probably less likely. These are spaces where the value chains are very tightly built around the OEMs and then the suppliers, classically the tier-one suppliers across value chains. Maybe there is some startup investment play. We certainly have played in a couple of these spaces; I mentioned some of them already today. But this is maybe where the incumbents have it all to lose. It’s more for them to lose than for the startups to win, just because of the scale of what needs to be done and deployed. Bertrand Schmitt That’s an interesting point. I think some players in energy production, for instance, are moving very fast and behaving almost like startups. Usually, it’s the independent energy suppliers, who are not held back by too much regulation, that move faster. Utility companies, as we just discussed, have more constraints. I would also say that if you take the semiconductor space, there has been quite a lot of startup activity, way more than usual, and there have been some incredible successes. Just a few weeks ago, Rock was more or less acquired. Now, you have to play games; it’s not an outright acquisition, but $20 billion for an IP licensing agreement is close to an acquisition. That’s an incredible success for a company started maybe 10 years ago. You have another, Cerebras, one of its competitors, valued, I believe, in a similar range. I think there is definitely some activity. It’s definitely a different game compared to your software startup in terms of investment. But as we have seen with AI in general, the need for investment might be larger these days. 
Yes, it might be the traditional players, if they can move fast enough, to be frank, because for some of them, when you have decades of being run as a slow-moving company, it’s hard to change things. At the same time, it looks like VCs are getting bigger, and Wall Street is getting more ready to finance some of these companies. I think there will be opportunities for startups, but definitely different types of startups in terms of profile. Nuno Gonçalves Pedro Exactly. From an investor standpoint, on the VC side at least, our core belief is that it’s more about niches: big niches that need to be fundamentally disrupted, or solutions that require fundamental interoperability and integration where the incumbents have no motivation to do it. Things that are a little bit more about packaging on the semiconductor side, or other elements of actual interoperability, even at the software layer that feeds into infrastructure. If you’re a growth investor or a private equity investor, there are other plays available to you. A lot of these projects need to be funded and need to be scaled. We’re now seeing projects being funded even for very large tech companies; we mentioned it in one of the previous episodes. When Meta, for example, is going to the market to get funding for data centers, there are projects to be funded there, just because of the quantum and scale of some of these projects; either because of financial interest for the tech companies specifically or for other reasons, they need to be funded by the market. There are other plays right now, certainly, if you’re a larger private equity or growth investor and you want to come into the market and do projects. Even public-private financing is now available for a lot of things. Definitely, there are a lot of things emerging that require a lot of funding, even for large-scale projects. 
Which means the advent of some of these projects, where realization is hopefully more of a given than in other circumstances, because there’s actual commercial and private capital behind it to fuel it as well, not just industrial policy and money from governments. Bertrand Schmitt There was this quite incredible stat. I guess everyone heard about the incredible GDP growth in Q3 in the US at 4.4%. Apparently, half of that growth, so around 2.2 percentage points, has been coming from AI and related infrastructure investment. That’s pretty massive: half of your GDP growth coming from something that was not there three years ago, or was there, but not at this intensity of investment. Those are the numbers we are talking about. I’m hearing that there is a good chance that in 2026, we’re talking about 5%, even potentially 6%, GDP growth, again with half of it potentially coming from AI and all the related infrastructure growth that comes with AI. As a conclusion for this episode on infrastructure, as we just said, it’s not just AI; it’s a whole stack, and it’s manufacturing in general as well. Definitely in the US and in China, there is a lot going on. As we have seen, computing needs connectivity and networks, which need power, energy, and the grid, and all of this needs production capacity and manufacturing. Manufacturing can benefit from AI as well; that way, the loop fully closes on itself. Infrastructure is the next big thing. It’s an opportunity, probably more for incumbents, but certainly, as usual with such big growth, an opportunity for startups as well. Thank you, Nuno. Nuno Gonçalves Pedro Thank you, Bertrand.
Atombeam CEO Charles Yeomans joins Chris Lustrino to break down a deceptively simple idea with massive implications: make data smaller while it's streaming so you can move and process more of it—without upgrading networks.Charles explains Atombeam's commercial product NeurPack, how it can often quadruple effective bandwidth, and why this matters across IoT, smart meters, satellites, defense, oil & gas wells, fintech, and eventually data centers and GPU utilization. They also dig into the realities of commercialization—choosing near-term deals that close fast while still pursuing multi-year “industry standard” opportunities—and why execution (not invention) is the real differentiator.00:00 What Atombeam does (pizza analogy)03:13 NeurPack explained05:35 Why 95% of IoT data doesn't move09:38 “Like launching 3 more satellites”13:57 Commercialization + customers16:31 Data centers + GPU utilization24:29 Defense traction + partnerships26:44 What success looks like (distribution)
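The "quadruple effective bandwidth" idea is easiest to see with a toy illustration. The sketch below is not Atombeam's NeurPack algorithm (which is proprietary); it just uses ordinary zlib stream compression on hypothetical, repetitive smart-meter telemetry to show why shrinking data in flight multiplies the effective capacity of a fixed link.

```python
import json
import zlib

# Hypothetical telemetry: repetitive JSON records, like smart-meter readings.
records = [
    json.dumps({"meter_id": "m-001", "kwh": 1.2 + i * 0.01, "status": "ok"})
    for i in range(1000)
]
raw = "\n".join(records).encode()

# Compress the stream before it hits the wire.
compressor = zlib.compressobj(level=6)
compressed = compressor.compress(raw) + compressor.flush()

# If the link moves N bytes/sec of compressed data, effective
# bandwidth in raw terms is N * ratio, with no network upgrade.
ratio = len(raw) / len(compressed)
print(f"raw: {len(raw)} bytes, compressed: {len(compressed)} bytes")
print(f"effective bandwidth multiplier: {ratio:.1f}x")
```

The exact multiplier depends entirely on how redundant the data is; highly structured IoT payloads like the ones discussed in the episode compress far better than already-compressed media.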
Just a brief podcast about the VMware{code} labs at Connect and the GPUs we ordered for the Minisforum systems, running Private AI. Technical difficulties made this a short podcast.
Graphics processing units (GPUs) have become the most important commodity in the AI boom — and have made Nvidia a multi-trillion dollar company. But the tensor processing unit (TPU) could present itself as competition for the GPU.TPUs are developed by Google specifically for AI workloads. And so far, Anthropic, OpenAI and Meta have reportedly made deals for Google's TPUs.Christopher Miller, historian at Tufts University and author of "Chip War: The Fight for the World's Most Critical Technology," explains what this could mean.
This week we're taking stock of conversation trends and letting it rip on AI market jitters and what happens when the math stops math-ing.We start with the numbers that have investors nervous: Amazon's $200 billion capex projection for 2026, and the uncomfortable reality of building an entire economy on depreciating GPU infrastructure with a three-year shelf life. We look at why the dot-com bubble comparisons are incomplete, and question what happens when billions flow overwhelmingly into the transformer model architecture while research into others starves.Then we shift from market corrections to attention economics, unpacking how AI tools promise productivity while actually training us to outsource thinking itself. The cost is both financial and experiential. When was the last time you sat alone without reaching for your phone? Can you still read sentences that run four lines long?The episode lands on an uncomfortable question about who gets to have unmediated experiences anymore, and whether we're living our own lives or just consuming other people's.Mentioned: Ed Zitron's “Better Offline” podcast Derek Thompson's Plain English podcast interview with Paul Kedrosky on market conditions and signs of a bubble Stephen Colbert on “truthiness” Enshittification, coined by Cory Doctorow MIT on the philosophical puzzle of AI Netflix's main competition is sleep Point of view: Gen Z will remember more of other people's memories than their own Blaise Pascal writing about attention in 1670
Shownotes - https://billfairchild.notion.site/NNP-134-2fb2150ee84f8074ae68d9e210c897e6https://www.youtube.com/fanthedeckhttps://www.youtube.com/retrogamecorpshttps://www.youtube.com/thephawxhttps://www.youtube.com/nerdnestMain Topics Covered:Estimating the time to beat our Steam libraries (with surprising numbers from Russ, Rich, and Cary)The recent delays and pricing predictions for Valve's Steam Machine and Steam FramePotential release timelines and the importance of timing for hardware in a competitive marketRumors and implications of AMD's GPU pricing strategies amid supply shortagesEpic Games Store's new overhaul and potential integration with next-gen consolesThe Nintendo Partner Showcase: new game announcements including "Orbitals" and "Indiana Jones"Sony's PS5 sales performance and digital sales insightsThe upcoming release of GTA VI and industry expectationsThe future of game libraries, backward compatibility, and digital ownership
The best podcast about games and technology is back with you: Zavtracast (Завтракаст) and its three permanent hosts, Dima, Timur, and Maxim. Actually, Maxim couldn't join this episode. No matter: the two remaining hosts talked about Apple's purchase of Q ai, the new Xbox, how Overwatch had the number 2 taken away, Borderlands 4, and much more (plus a pile of AI slop).
David Choi and Conor Moore are CoFounders of Permian Labs, the builders behind USDai.AI infrastructure is projecting trillions of dollars in CapEx spend, but there's a problem: traditional finance can't keep up. Banks move too slow. Private credit funds can't scale. The most important commodity in the world has no liquid debt market.USDai is filling this gap by financing AI infrastructure with GPU-backed loans, offering stablecoin depositors 10-15% APR. David and Conor break down how they're using DeFi rails and tokenization to create liquid debt markets for GPUs, enabling institutional borrowers to access capital and retail users to earn yield on productive AI infrastructure.In this episode, we cover:+ Why trillions in AI CapEx can't get traditional financing+ How USDai structures loans against GPUs, not businesses+ Why this could become "the interest rate of artificial intelligence"+ Their two-token model: USDai vs. sUSDai------
Recorded February 4, 2026. We also cover the upcoming Steam Machine, sad GPU trends, and the arc of the Arc B770. We've got our review of the Thrustmaster T248R, and we rapidly dive into AMD's glorious financial success, plus a splash of ARM's Q3 results. Surprise! There are discussions on memory prices, Nvidia's RTX 50 series supply, and the week's "best" security breaches.Powered by Clippy.Timestamps:0:00 Intro00:25 Patreon01:16 Food with Josh02:36 AMD Financials08:43 Arm Financials11:45 AMD says Steam Machine still on track for early 2026 (until it isn't)13:30 New memory price outlook has DDR5 doubling again in Q114:48 Low VRAM GPUs reportedly 75 percent of NVIDIA Q1 supply16:45 AMD also in the lower VRAM game19:45 Intel Arc B770 is supposedly canceled22:17 Spinning rust lives on25:33 Qualcomm loses chief CPU architect27:09 PCPer (possibly) influences Microsoft to backpedal on AI features!31:31 5GbE is getting more affordable33:44 (In)Security Corner43:32 Gaming Quick Hits47:56 Josh reviews the Thrustmaster T248R55:45 Picks of the Week1:07:56 Outro ★ Support this podcast on Patreon ★
In this Breaking Analysis we squint through the statements made by company managements on this week's earnings calls, including what we see as some assumptions that conflict with our scenario for how the AI buildout will evolve. In particular, while operators are understandably focused on Nvidia's gross margins, we believe observers are underestimating the cost advantages that Nvidia will have relative to GPU alternatives, including those from AMD, Intel and hyperscalers.
Thinking about leaving the console life behind for the modded maps and high-frame rates of DayZ PC? This week, Andy and Dave break down the complex world of hardware for the absolute beginner. We know how daunting the switch can be, so we're simplifying what matters most when building or buying your first gaming rig.From CPU bottlenecks to the importance of an SSD, we explain what you should prioritize to get the smoothest experience in Chernarus and beyond!
Most AI infrastructure today is hitting a breaking point. Marc Austin, CEO of Hedgehog, reveals how open source networking and cloud-native solutions are revolutionizing how enterprises build and operate AI at scale. This episode addresses issues many building AI infrastructure today are facing — expensive proprietary systems, overwhelmingly complex network configurations, and ways to make on-prem AI infrastructure feel just like the public cloud.We discuss how networking is the hidden bottleneck in scaling GPU clusters and the surprising physics and hardware innovations enabling higher throughput. Marc shares the journey of building Hedgehog, an open source, cloud-native platform designed for AI workloads that bridges the gap between complex hardware and seamless, user-friendly cloud experiences. Marc explains how Hedgehog's software abstracts and automates the networking complexity, making AI infrastructure accessible to enterprises without dedicated networking teams.We break down the future of AI networks, from multi-cloud and hybrid environments to the rise of Neo Clouds and the open source movement transforming enterprise AI infrastructure. If you're a CTO, data scientist, or AI innovator, understanding these network innovations can be your moat. 
Listen to this episode to see how open source, cloud-native networking, and physical innovation are shaping the AI infrastructure of tomorrow.Podcast LinksWatch: https://www.youtube.com/@alexa_griffithRead: https://alexasinput.substack.com/Listen: https://creators.spotify.com/pod/profile/alexagriffith/More: https://linktr.ee/alexagriffithWebsite: https://alexagriffith.com/LinkedIn: https://www.linkedin.com/in/alexa-griffith/Find out more about the guest at LinkedIn: https://www.linkedin.com/in/austinmarc/Website: https://hedgehog.cloud/Github: https://github.com/githedgehogChapters00:00 Rethinking AI Infrastructure02:49 The Role of Networking in AI05:54 Marc's Journey to Hedgehog08:46 Lessons from Big Companies11:38 Requirements for AI Networks14:48 Advancements in AI Networking17:33 Future Challenges in AI Infrastructure20:46 Creating a Cloud Experience On-Prem23:32 The Shift to Hybrid Multi-Cloud28:10 Evolving AI Infrastructure and Efficiency30:57 AI Workloads and Network Configurations32:41 Zero Touch Lifecycle Management35:12 Support for Hardware Devices35:45 Networking Paradigms and Vendor Lock-in38:42 The Rise of Neo Clouds41:31 Demand for AI Infrastructure43:57 Open Source and Cloud-Native Networking47:27 Challenges of Building a Networking Startup50:46 Proud Accomplishments at Hedgehog52:41 Future Excitement in AI Inference
This talk show is made possible in part by MSI. All opinions in this video are our own; MSI has no editorial input on the content and, like you, is seeing the video here for the first time.Settle in for the starting shot of the weekend. An hour and a half of cozy chatter about video games will tickle your eardrums within seconds. Huey, JJ, and Koos are ready in the studio to record a new edition of Einde van de Week Live, the talk show in which we go through the week's most important game-related news with you, the viewers. We have a fine list of topics ready for you. The three discuss the Nintendo Direct that aired on Thursday, devoted entirely to games from third-party publishers. Also up for discussion: Guerrilla's newest game, Horizon Hunters Gathering, and Blizzard's decision to call Overwatch 2 simply Overwatch from now on. You'll experience all this and more in the Einde van de Week Live of Friday, December 5, 2025.Guerrilla making waves with their new game 'Horizon Hunters Gathering'In other news, the three happily chat about the new footage of the GTA clone Samson, the absence of a Fallout 3 shadow drop, and the controversy around the Definitive Edition of The Division. Plus, of course, a dose of weekly madness in the form of Cool of Serieus Uncool. Let the weekend begin; we're all set.Get 150 euros off the Cyborg 15 gaming laptopThis week MSI puts the Cyborg 15 in the spotlight. This laptop has a very attractive price thanks to the VAT-free days at MediaMarkt. Under the famous hood sit an Intel Core 7 240H processor, an NVIDIA GeForce RTX 5060 GPU, a 512GB SSD, and a 4-zone RGB keyboard. You can enjoy the performance on a 15.6” full-HD 144 Hz panel. 
This laptop is temporarily available here with a 150-euro discount.Score tickets for the Wu-Tang Clan concert in the Ziggo DomeOn Monday, March 2, the Wu-Tang Clan will shake the Ziggo Dome to its foundations. The rap group is an absolute favorite among the hip-hop lovers on our editorial team, for their music and lyrics, but also for their activities in and around video games. For example, they are part of the fantastic Def Jam series and have made games themselves, one of which is coming out soon. If you want to attend the Clan's concert, you can buy tickets here.
AWS Lambda is fantastic for small, stateless code on demand. But when your “function” starts looking like a workflow (retries, backoff, long waits, human approvals, callbacks), classic Lambda patterns can feel like a fight: 15-minute max runtime, no built-in state, and orchestration glue everywhere (Step Functions, queues, schedules, and state you did not want to own). In this episode of AWS Bites, Eoin and Luciano explore AWS Lambda Durable Functions, announced at re:Invent 2025. It's still Lambda (same runtimes and scaling), but with durable execution superpowers: named steps, automatic checkpointing, and the ability to suspend and resume from a safe point without redoing completed work. We unpack the replay/resume model under the hood, when this approach shines, and the gotchas (determinism, idempotency, replay-aware logging, debugging resumed runs). To make it real, we share how we rebuilt PodWhisperer v2 using Durable Functions to orchestrate a GPU-powered WhisperX pipeline, LLM refinement, speaker naming, and caption generation.In this episode, we mentioned the following resources: AWS announcement blog post: https://aws.amazon.com/blogs/aws/build-multi-step-applications-and-ai-workflows-with-aws-lambda-durable-functions/ Durable Functions best practices: https://docs.aws.amazon.com/lambda/latest/dg/durable-best-practices.html The replay model deep dive (Dev.to): https://dev.to/aws/the-replay-model-how-aws-lambda-durable-functions-actually-work-2a79 Build workflows that last (Dev.to): https://dev.to/aws/aws-lambda-durable-functions-build-workflows-that-last-3ac7 Testing Durable Functions in TypeScript (Dev.to): https://dev.to/aws/testing-aws-lambda-durable-functions-in-typescript-5bj2 Developing Durable Functions with AWS SAM (Dev.to): https://dev.to/aws/developing-aws-lambda-durable-functions-with-aws-sam-ga9 Hands-on notes: https://www.andmore.dev/blog/lambda_durable_functions/ PodWhisperer (open source): https://github.com/fourTheorem/podwhisperer/ 
WhisperX: https://github.com/m-bain/whisperX Do you have any AWS questions you would like us to address?Leave a comment here or connect with us on X/Twitter, BlueSky or LinkedIn:- https://twitter.com/eoins | https://bsky.app/profile/eoin.sh | https://www.linkedin.com/in/eoins/- https://twitter.com/loige | https://bsky.app/profile/loige.co | https://www.linkedin.com/in/lucianomammino/
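The replay/resume model the episode unpacks can be illustrated with a generic sketch, in plain Python and deliberately not the actual AWS Lambda Durable Functions API: a journal of named, checkpointed steps lets a resumed run return saved results instead of re-executing completed work, which is also why steps must be deterministic and idempotent. The step names below echo the PodWhisperer pipeline mentioned above but are purely illustrative.

```python
# Illustrative sketch of the replay model behind durable-execution engines.
# This is a generic toy, NOT the AWS Lambda Durable Functions API.

class DurableContext:
    def __init__(self, journal=None):
        # journal maps step names to previously checkpointed results
        self.journal = dict(journal or {})
        self.executions = 0  # counts steps that actually ran this time

    def step(self, name, fn):
        # On replay, return the checkpointed result without re-running the step
        if name in self.journal:
            return self.journal[name]
        result = fn()
        self.executions += 1
        self.journal[name] = result  # checkpoint after the step completes
        return result

def workflow(ctx):
    # The workflow function is re-run from the top on every resume,
    # so each step must be named and deterministic.
    audio = ctx.step("transcribe", lambda: "transcript")
    refined = ctx.step("refine", lambda: audio + " + llm fixes")
    return ctx.step("captions", lambda: refined + " -> captions")

# First run: all three steps execute and are checkpointed.
first = DurableContext()
workflow(first)

# Resumed run (e.g. after a crash or a long suspend): replay skips completed work.
resumed = DurableContext(journal=first.journal)
workflow(resumed)
assert resumed.executions == 0  # nothing was re-executed
```

This is also where the gotchas discussed in the episode come from: because the function body is replayed, any non-deterministic call outside a named step (random values, timestamps, plain log lines) will behave differently on resume.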
The AI Breakdown: Daily Artificial Intelligence News and Discussions
January marked a clear break between the AI era people thought they were in and the one that actually arrived. Agentic coding crossed from novelty to default, tools like Claude Code reset expectations for what individuals can build, and systems such as OpenClaw and Moltbook showed how quickly agents are becoming ecosystems, not just features. This episode explains why the shift felt sudden, why it caught so many off guard, and why the real story isn't sentient agents but a widening gap between AI capability and real-world adoption. In the headlines: Nvidia and OpenAI, Intel's GPU pivot, Apple's embrace of agentic coding, developer dependence on Claude Code, and Disney's strategic turn toward experiences.Brought to you by:KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcastsRackspace AI Launchpad - Build, test and scale intelligent workloads faster - http://rackspace.com/ailaunchpadZencoder - From vibe coding to AI-first engineering - http://zencoder.ai/zenflowOptimizely Opal - The agent orchestration platform built for marketers - https://www.optimizely.com/theaidailybriefAssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/briefSection - Build an AI workforce at scale - https://www.sectionai.com/LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614Interested in sponsoring the show?
sponsors@aidailybrief.ai
Timestamps: 0:00 feeeeeeling hot hot hot! 0:15 Adobe Animate discontinued, then not 2:14 Intel hires GPU veteran... 3:52 Moltbook (last time) + Rent-a-human site 7:22 QUICK BITS INTRO 7:33 Copilot in File Explorer 8:08 France raids X offices, Spain social ban 9:11 MORE Ryzen CPUs fried in ASRock mobos 10:07 AMD adopting Intel's 'FRED' 10:54 GitHub's plan to deal with vibe coding slop NEWS SOURCES: https://lmg.gg/C94rA Learn more about your ad choices. Visit megaphone.fm/adchoices
Apple shatters revenue records, Tim Cook teases new innovations coming this year, Walmart hits $1T market cap, everyone's still pouring money into AI, and OpenClaw's “skills” have serious security concerns.Stephen's Newsletter SignupAd-Free + Bonus EpisodesShow Notes via EmailWatch on YouTube!Join the CommunityEmail Us: podcast@primarytech.fm@stephenrobles on Threads@jasonaten on Threads————————SponsorsShopify: Sign up for your one-dollar-per-month trial and start selling today at: shopify.com/primaryQuo: Try QUO for free PLUS get 20% off your first 6 months when you go to Quo.com/primary————————Links from the showMac Power Users - RelayApple announces all-time record in revenue, iPhone sales – Six ColorsWhile Everyone Else Tries to Replace the iPhone, Apple Just Had Its Best Quarter EverNew Mac configurator may point to separate CPU and GPU options - 9to5MacTim Cook hints at ‘never been seen' innovations coming this year - 9to5MacMeta (META) Q4 2025 earnings185 Billion Reasons Google Isn't Worried AI Will Kill SearchGoogle's subscriptions rise in Q4 as YouTube pulls $60B in yearly revenue | TechCrunchIt Took 64 Years to Build Walmart. It Took 3 Years to Turn It Into a $1 Trillion Tech CompanyXcode moves into agentic coding with deeper OpenAI and Anthropic integrations | TechCrunchOpenClaw's AI ‘skill' extensions are a security nightmare | The VergeHumans are infiltrating the social network for AI bots | The VergeAnthropic's 'Dishonest' Ads Clearly Struck a Nerve With Sam AltmanExpect more upsells and subscription bundles from Apple, Creator Studio was just the start - 9to5MacNow anyone can tap Ring doorbells to search for lost dogs | The VergeAirTag 2 Has Wild Range! #tech #airtag - YouTubeGoogle announces Pixel 10a with completely flat cameraAlexa Plus is now available to everyone in the US | The VergeApple Sports for iPhone updated with PGA, LPGA, and more - 9to5MacThe SpaceX-xAI Merger Isn't About Data Centers in Space. 
It's About Bailing Out Musk's Biggest GambleShortcuts Team Lead HiringGemini Mac App Tweet ★ Support this podcast ★
NVIDIA stock is plummeting after talks between the GPU manufacturer and OpenAI started to break down. Now everyone is selling off their tech stocks. Has the AI bubble just burst? Have investors been fooled about the capabilities of AI?Watch the podcast episodes on YouTube and all major podcast hosts including Spotify.CLOWNFISH TV is an independent, opinionated news and commentary podcast that covers Entertainment and Tech from a consumer's point of view. We talk about Gaming, Comics, Anime, TV, Movies, Animation and more. Hosted by Kneon and Geeky Sparkles.Get more news, views and reviews on Clownfish TV News - https://more.clownfishtv.com/On YouTube - https://www.youtube.com/c/ClownfishTVOn Spotify - https://open.spotify.com/show/4Tu83D1NcCmh7K1zHIedvgOn Apple Podcasts - https://podcasts.apple.com/us/podcast/clownfish-tv-audio-edition/id1726838629
In today's Tech3 from Moneycontrol, we look at the government expanding Startup India definition to formally include deeptech firms with longer benefit windows. We also track rising fears of a SaaS reset as AI reshapes software, Cognizant's decision to pay full bonuses amid IT services uncertainty, Fractal Analytics delaying its foundation model launch due to a GPU crunch ahead of its IPO, and Alphabet's plan to sharply ramp up capital spending as the global AI race intensifies.
This week, Arnoud Wokke, Jurian Ubachs, Jelle Stuip and Willem de Moor talk about the history of netbooks, rising GPU prices, the success of the Nintendo Switch, and the AI software OpenClaw. 0:00 Intro0:19 Opening1:09 .post26:27 Love for netbooks36:33 GPUs are now rising in price too47:33 Switch closes in on the PS2 as the most successful console55:23 OpenClaw and Moltbook: dystopian and fascinating1:06:48 Sneak peekSee omnystudio.com/listener for privacy information.
Big Pharma’s report cards just rewrote the market mood. Market View dives into why Pfizer slipped into a quarterly loss, why Novo Nordisk plunged double digits, and how Merck managed to buck the trend. We also unpack Singtel’s rally on acquisition chatter and what it signals for Singapore’s telco strategy. Across the Pacific, tech stocks drag Wall Street lower as Nvidia and Microsoft weigh on the NASDAQ. Plus, our UP or DOWN game covers PayPal, PepsiCo, Walmart and Intel’s GPU ambitions. All this and more, hosted by Michelle Martin with Ryan Huang.See omnystudio.com/listener for privacy information.
In this second episode of El Garaje de Cupertino, we start with a 'small' technical problem (we talked to ourselves for half an hour!), but we recover to analyze the week's bombshells: Apple breaking with TSMC? Are dedicated graphics cards returning to the Mac? And, of course, we leak every detail of the foldable iPhone. Don't miss the debate with Ali, Joaquín and Guaica!Follow us on TELEGRAM!Become a supporter of this podcast: https://www.spreaker.com/podcast/el-garaje-de-cupertino--3153796/support.
This is episode 3/3 with Alex Merose about his thoughts on weather foundation models. He is a member of the technical staff at Open Athena. In this episode, Alex steps through system characteristics of weather foundation models and how we can approach building them. Toward the end, the episode touches on an example of applying these approaches to a simulation of the Earth's weather over a period of 800 years. The approaches Alex has been talking about enable the use of a GPU to process this simulation in only one day. We conclude with the values that drive Alex's work.
Fidelity announces its FIDD stablecoin. Robinhood plans 24/7 tokenized stock trading. The EF PSE team shares a client-side GPU acceleration roadmap. And Uniswap adds CCAs on its web app. Read more: https://ethdaily.io/871 Sponsor: Arkiv is an Ethereum-aligned data layer for Web3. Arkiv brings the familiar concept of a traditional Web2 database into the Web3 ecosystem. Find out more at Arkiv.network Content is for informational purposes only, not endorsement or investment advice. The accuracy of information is not guaranteed.
On this episode of That Tech Pod, we talk with Logan Lawler, Senior Director at Dell Technologies, about what it takes to make AI actually work in the real world. Logan shares his 16-year journey at Dell and why his focus today is less on hype and more on practical infrastructure choices that enable AI at scale.We break down Edge AI versus Cloud AI with clear, concrete examples, including how GPU-accelerated desktops, workstations, and hybrid cloud setups can turn “that's impossible” AI problems into manageable ones. Logan also highlights why storage, not compute, is often the biggest bottleneck, and the common mistakes organizations make when data can't keep up with GPUs. The conversation gets into energy and sustainability, from the environmental cost of massive data centers to what it means when nuclear power and AI collide. We also explore the human side of AI: whether instant answers are making us lazier, why struggle is still essential for learning, and how that idea shows up in parenting, education, and work. We close with real-world edge AI success stories, a few cautionary tales, and some lighter moments, making this a grounded discussion on AI, infrastructure, and the tradeoffs we rarely talk about.Logan Lawler works at Dell Technologies, where he leads strategy for Dell Pro Precision AI Solutions. Over his 16-year career at Dell, he's worked across sales, marketing, and e-commerce, and now helps enterprises and creative studios leverage high-performance AI workstations and hybrid cloud infrastructure. A frequent speaker and media guest, Logan explains how GPU-accelerated PCs and storage solutions are transforming industries from film and animation to healthcare research. Logan was raised in Missouri and is a graduate of the University of Missouri. He now lives in Texas with his family.
Adi Polak talks to Bryan Oliver (Thoughtworks) about his career in platform engineering and large-scale AI infrastructure. Bryan's first job: building pools and teaching swimming lessons. His challenge: running large-scale GPU data centers while keeping AI workloads predictable and reliable.SEASON 2 Hosted by Tim Berglund, Adi Polak and Viktor Gamov Produced and Edited by Noelle Gallagher, Peter Furia and Nurie Mohamed Music by Coastal Kites Artwork by Phil Vo
This episode is a ray of hope for despairing gamers who have resigned themselves to AI corporations sweeping up every graphics card. We talk about chips on which training and inference run many times faster than on GPUs. What the secret is, and what to expect next, we discuss with Zigfrid Zvezdin of Cerebras! We also welcome your likes, reposts, and comments in our messengers and social networks! Telegram chat: https://t.me/podlodka Telegram channel: https://t.me/podlodkanews Facebook page: www.facebook.com/podlodkacast/ Twitter account: https://twitter.com/PodcastPodlodka Hosts in this episode: Zhenya Katella, Egor Tolstoy Useful links: Guest's Telegram channel https://t.me/zzigfrid Guest's Telegram https://t.me/ziggerzz Guest's LinkedIn https://www.linkedin.com/in/zigfrid/
Your host, Sebastian Hassinger, talks with Alumni Ventures managing partner Chris Sklarin about how one of the most active US venture firms is building a quantum portfolio while “democratizing” access to VC as an asset class for individual investors. They dig into Alumni Ventures' co‑investor model, how the firm thinks about quantum hardware, software, and sensing, and why quantum should be viewed as a long‑term platform with near‑term pockets of commercial value. Chris also explains how accredited investors can start seeing quantum deal flow through Alumni Ventures' syndicate.Chris' background and Alumni Ventures in a nutshellChris is an MIT‑trained engineer who spent years in software startups before moving into venture more than 20 years ago.Alumni Ventures is a roughly decade‑old firm focused on “democratizing venture capital” for individual investors, with over 11,000 LPs, more than 1.5 billion dollars raised, and about 1,300 active portfolio companies.The firm has been repeatedly recognized as a highly active VC by CB Insights, PitchBook, Stanford GSB, and Time magazine.How Alumni Ventures structures access for individualsMost investors come in as individuals into LLC‑structured funds rather than traditional GP/LP funds.Alumni Ventures always co‑invests alongside a lead VC, using the lead's conviction, sector expertise, and diligence as a key signal.The platform also offers a syndicate where accredited investors can opt in to see and back individual deals, including those tagged for quantum.Quantum in the Alumni Ventures portfolioAlumni Ventures has 5–6 quantum‑related investments spanning hardware, software, and applications, including Rigetti, Atom Computing, Q‑CTRL, Classiq, and quantum‑error‑mitigation startup Qedma/Cadmus.Rigetti was one of the firm's earliest quantum investments; the team followed on across multiple rounds and was able to return capital to investors after Rigetti's SPAC and a strong period in the public markets.Chris also highlights 
interest in Cycle Dre (a new company from Rigetti's former CTO) and application‑layer companies like InQ and quantum sensing players.Barbell funding and the “3–5 year” viewChris responds to the now‑familiar “barbell” funding picture in quantum— a few heavily funded players and a long tail of small companies—by emphasizing near‑term revenue over pure science experiments.He sees quantum entering an era where companies must show real products, customers, and revenue, not just qubit counts.Over the next 3–5 years, he expects meaningful commercial traction first in areas like quantum sensing, navigation, and point solutions in chemistry and materials, with full‑blown fault‑tolerant systems further out.Hybrid compute and NVIDIA's signal to the marketChris points to Jensen Huang's GTC 2025 keynote slide on NVIDIA's hybrid quantum–GPU ecosystem, where Alumni Ventures portfolio companies such as Atom Computing, Classiq, and Rigetti appeared.He notes that NVIDIA will not put “science projects” on that slide—those partnerships reflect a view that quantum processors will sit tightly coupled next to GPUs to handle specific workloads.He also mentions a large commercial deal between NVIDIA and Groq (a classical AI chip company in his portfolio) as another sign of a more heterogeneous compute future that quantum will plug into.Where near‑term quantum revenue shows upChris expects early commercial wins in sensing, GPS‑denied navigation, and other narrow but valuable applications before broad “quantum advantage” in general‑purpose computing.Software and middleware players can generate revenue sooner by making today's hardware more stable, more efficient, or easier to program, and by integrating into classical and AI workflows.He stresses that investors love clear revenue paths that fit into the 10‑year life of a typical venture fund.University spin‑outs, clustering, and deal flowAlumni Ventures certainly sees clustering around strong quantum schools like MIT, Harvard, and Yale, but 
Chris emphasizes that the “alumni angle” is secondary to the quality of the venture deal.Mature tech‑transfer offices and standard Delaware C‑corps mean spinning out quantum IP from universities is now a well‑trodden path.Chris leans heavily on network effects—Alumni Ventures' 800,000‑person network and 1,300‑company CEO base—as a key channel for discovering the most interesting quantum startups.Managing risk in a 100‑hardware‑company worldWith dozens of hardware approaches now in play, Chris uses Alumni Ventures' co‑investor model and lead‑investor diligence as a filter rather than picking purely on physics bets.He looks for teams with credible near‑term commercial pathways and for mechanisms like sensing or middleware that can create value even if fault‑tolerant systems arrive later than hoped.He compares quantum to past enabling waves like nanotech, where the biggest impact often shows up as incremental improvements rather than a single “big bang” moment.Democratizing access to quantum ventureAlumni Ventures allows accredited investors to join its free syndicate, self‑attest accreditation, and then see deal materials—watermarked and under NDA—for individual investments, including quantum.Chris encourages people to think in terms of diversified funds (20–30 deals per fund year) rather than only picking single names in what is a power‑law asset class.He frames quantum as a long‑duration infrastructure play with near‑term pockets of usefulness, where venture can help investors participate in the upside without getting ahead of reality.
Gonka AI is a decentralized network that provides efficient AI computing power by leveraging global GPU resources for tasks like model training and inference. It challenges centralized providers like AWS and Google by using a novel "Proof of Work 2.0" mechanism, where nearly all compute goes to productive AI workloads rather than blockchain security. Guest: David Liberman and Daniil Liberman- Co-founders~This episode is sponsored by Gonka~Website: https://gonka.ai/X: https://x.com/gonka_aiDiscord: https://discord.gg/REcpeYc7P7GitHub: https://github.com/gonka-ai/gonka/pulls00:00 Intro01:00 Gonka's mission02:30 How can a platform use Gonka?03:45 Network capacity surge05:45 Scale growth in 18months08:45 Bitcoin of A.I.?11:30 Why decentralization?15:00 Security risks18:30 Value for token holders22:30 AI agents integration25:30 Gonka use cases28:00 Outro#Crypto #AI #cryptocurrency ~Decentralized A.I. At Warp Speed
This week, the hosts go deep on out-of-band updates, unwanted "innovations," and the uneasy cost of tech's latest gold rush. Plus, securing a Microsoft account is not as hard as some think, and neither are passkeys once you get past the jargon. And for developers, AI Dev Gallery offers a fascinating glimpse at what you can do for free with AI used against a CPU, GPU, or NPU. Windows 11 Microsoft issues an emergency fix for a borked Windows Update. Right. A fix for a fix. Hell freezes over, if only slightly: Microsoft quietly made some positive changes to forced OneDrive Folder Backup. Don't worry, it's still forced (and appears to be opt-in, but isn't). But you can back out more elegantly. So it's opt-out, not opt-in, but a step forward. Plus, a new behavior Windows 11 on Arm PCs can now download games from the Xbox app (previously only through the Insider program) Over 85 percent of Xbox games on PC work in WOA now Prism emulator now supports AVX and AVX2 and Epic Anti-Cheat, and there is a new Windows Performance Fit feature offering guidance on which titles should play well. Beta: New 25H2 build with account dialog modernization, Click to Do and desktop background improvements. Not for Dev, suggesting it's about to move to 26H1 Notepad and Paint get more features yet again. Notably, these updates are for Dev and Canary only, suggesting these might be 26Hx features (then again, versions don't matter, right?) 
AI Just say no: To AI, to Copilot, and to Satya Nadella Our national nightmare is over: You can now (easily) hide Copilot in Microsoft Edge ChatGPT Go is now available worldwide, ads are on the way because of course Wikipedia partners with Amazon, Meta, Microsoft, more on AI Xbox & gaming January Xbox Update brings Game Sync Indicator, more Solid second half of January for Xbox Game Pass Microsoft will likely introduce a free, ad-supported Xbox Cloud Gaming tier because of course Tips & picks Tip of the week: Secure your Microsoft account App pick of the week: AI Dev Gallery RunAs Radio this week: Ideation to Implementation with Amber Vandenburg Liquor pick of the week: Estancia Raicilla Hosts: Leo Laporte, Paul Thurrott, and Richard Campbell Download or subscribe to Windows Weekly at https://twit.tv/shows/windows-weekly Check out Paul's blog at thurrott.com The Windows Weekly theme music is courtesy of Carl Franklin. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit
What happens when the AI race stops being about size and starts being about sense? In this episode of Tech Talks Daily, I sit down with Wade Myers from MythWorx, a company operating quietly while questioning some of the loudest assumptions in artificial intelligence right now. We recorded this conversation during the noise of CES week, when headlines were full of bigger models, more parameters, and ever-growing GPU demand. But instead of chasing scale, this discussion goes in the opposite direction and asks whether brute force intelligence is already running out of road. Wade brings a perspective shaped by years as both a founder and investor, and he explains why today's large language models are starting to collide with real-world limits around power, cost, latency, and sustainability. We talk openly about the hidden tax of GPUs, how adding more compute often feels like piling complexity onto already fragile systems, and why that approach looks increasingly shaky for enterprises dealing with technical debt, energy constraints, and long deployment cycles. What makes this conversation especially interesting is MythWorx's belief that the next phase of AI will look less like prediction engines and more like reasoning systems. Wade walks through how their architecture is modeled closer to human learning, where intelligence is learned once and applied many times, rather than dragging around the full weight of the internet to answer every question. We explore why deterministic answers, audit trails, and explainability matter far more in areas like finance, law, medicine, and defense than clever-sounding responses. There is also a grounded enterprise angle here. We talk about why so many organizations feel uneasy about sending proprietary data into public AI clouds, how private AI deployments are becoming a board-level concern, and why most companies cannot justify building GPU-heavy data centers just to experiment. 
Wade draws parallels to the early internet and smartphone app eras, reminding us that the playful phase often comes before the practical one, and that disappointment is often a signal of maturation, not failure. We finish by looking ahead. Edge AI, small-footprint models, and architectures that reward efficiency over excess are all on the horizon, and Wade shares what MythWorx is building next, from faster model training to offline AI that can run on devices without constant connectivity. It is a conversation about restraint, reasoning, and realism at a time when hype often crowds out reflection. So if bigger models are no longer the finish line, what should business and technology leaders actually be paying attention to next, and are we ready to rethink what intelligence really means? Useful Links Connect with Wade Myers Learn More About MythWorx Thanks to our sponsors, Alcor, for supporting the show.