We just passed the 25th anniversary of the GeForce 3, which felt like a good reason to dust off the April 2001 issue of Maximum PC. We reflect on both a quarter-century of programmable pixel shaders -- the tech that's defined 3D rendering ever since -- and Will's cover story on the new GPU, including the secretive trip to Nvidia to benchmark it, a random Tim Sweeney interview, and more. There's also plenty of other fun retro tech to dish about in here, including super-early home Wi-Fi devices, the reveal of Windows XP, Pentium 4 RDRAM weirdness, some classic Gordon Mah Ung hijinks, and more.

The Maximum PC issue for this episode: https://archive.org/details/maximum-pc-the-nearly-complete-collection/Maximum%20PC/2001/031%20Maximum%20PC%204-1-2001/page/n1/mode/2up

A clip of the Jack Matthews Metroid Prime interview (full interview also on the channel): https://www.youtube.com/watch?v=0oiIm5Ymu6s

Support the Pod! Contribute to the Tech Pod Patreon and get access to our booming Discord, a monthly bonus episode, your name in the credits, and other great benefits! You can support the show at: https://patreon.com/techpod
A Zoomer arrested for stealing $46M from the US Marshals, Kraken makes history with a Fed Master Account, and IREN builds to 150,000 GPUs.

Get your tickets to OPNEXT 2026 before prices increase! Join us on April 16 in NYC for technical discussions, investor talks, and intimate conversation with the brightest minds in Bitcoin.

Chris Johhansen of Ion Stream and Kaan Farahani of Luxor join us to talk about the insane arrest of John DeGuida for allegedly stealing $46 million from the US Marshals Service. We break down Kraken Financial's historic Fed Master Account and what a "skinny" seat at the table means for the industry. Plus, we analyze the massive pivot from ASICs to GPUs and review the tumultuous Bitcoin hash rate data from February.

Subscribe to the newsletter! https://newsletter.blockspacemedia.com

Notes:
* Zoomer stole $46M from the US Marshals Service (his dad!)
* Kraken gets first Fed Master Account.
* IREN expanding GPU fleet to 150,000.
* Difficulty adjustment targeting 7.5% up.

Timestamps:
00:00 Start
04:53 Difficulty Report by Hashrate Index
07:49 $46M Stolen from US Marshals Service
15:35 Kraken Financial Granted Federal Reserve Master Account
21:48 AI Compute & Neocloud Dynamics
24:11 AI boom vs crypto boom
27:39 AI inference vs training
30:44 Scoping AI deals
32:41 Are H100s still viable?
36:35 Hashrate
37:46 February surprises
44:40 Which ASICs are profitable?
45:36 More hashrate declines?
47:36 5 cents per kWh
49:29 Hashrate prediction
52:51 IREN Expands GPU Fleet
1:01:44 Cry Corner: Miners Are Dumping BTC?
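For context on the difficulty-adjustment item above: Bitcoin retargets difficulty every 2,016 blocks, scaling it by the ratio of the two-week target duration to the time those blocks actually took, clamped to a factor of four. A minimal sketch of that rule (the numbers below are illustrative; the 7.5% figure discussed in the episode is an estimate, not computed here):

```python
# Sketch of Bitcoin's difficulty retarget rule (simplified; the real
# consensus code has an off-by-one in the measured interval).

TARGET_BLOCK_TIME = 600          # seconds (10 minutes)
RETARGET_WINDOW = 2016           # blocks per adjustment period
EXPECTED_SECONDS = TARGET_BLOCK_TIME * RETARGET_WINDOW  # ~two weeks

def retarget(old_difficulty: float, actual_seconds: float) -> float:
    """Scale difficulty by expected/actual elapsed time, clamped to 4x."""
    ratio = EXPECTED_SECONDS / actual_seconds
    ratio = max(0.25, min(4.0, ratio))   # consensus rule caps the swing
    return old_difficulty * ratio

# If hashrate rose and the window took ~7% less time than two weeks,
# difficulty adjusts up by roughly the same proportion:
faster = retarget(100.0, EXPECTED_SECONDS / 1.075)
print(round(faster, 1))  # 107.5
```

So a "7.5% up" adjustment simply means the last window of blocks arrived about 7.5% faster than the ten-minute target pace.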
In the second of our two-part panel discussion from Morgan Stanley's TMT conference, our analysts break down the complexity of financing AI's infrastructure and the technological disruption happening across industries.

Read more insights from Morgan Stanley.

----- Transcript -----

Michelle Weaver: Welcome back to Thoughts on the Market, and welcome to part two of our conversation live from the Technology, Media and Telecom conference. I'm Michelle Weaver, U.S. Thematic and Equity Strategist at Morgan Stanley. Today we're continuing our conversation with Stephen Byrd, Josh Baer and Lindsay Tyler. This time looking at financing AI and some of the risks to the story. It's Friday, March 6th at 11am in San Francisco. So yesterday we spoke about AI adoption. And while there's a lot of excitement on this theme, there've also been some concerns bubbling up. Lindsay, I want to start with you around financing. That's another critical component of the AI build out. What's your latest on the magnitude of the data center financing gap, and what role [are] credit markets playing here? Lindsay Tyler: Yeah, in partnership with Thematic Research, Stephen and team, and colleagues across fixed income research last summer, we did put out a note thinking about the data center financing gap, right? So, Stephen and team modeled a $3 trillion global data center CapEx need over a four-year timeframe. So, in partnership with fixed income across asset classes, we thought: okay, how will that really be funded? And we came to the conclusion that the high-quality hyperscalers generate a good amount of cash flow, right? So, there's cash from ops that can fund approximately half of that. But then we think that fixed income markets are critical to fund the rest of the funding gap. And really private credit is the leader in that, aided by corporate credit and also securitized credit. What we've seen since is that yes, private credit has served a role.
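The financing-gap framing above reduces to simple arithmetic: of a $3 trillion CapEx need, roughly half is covered by hyperscaler operating cash flow, leaving the remainder for credit markets. A back-of-the-envelope sketch (figures come from the discussion; the 50% cash-from-ops share is described as approximate):

```python
# Rough split of the $3T data center CapEx need described in the episode.
capex_need = 3.0            # trillions USD, global, over ~4 years
cash_from_ops_share = 0.5   # ~half funded by hyperscaler cash flow

funding_gap = capex_need * (1 - cash_from_ops_share)
print(f"~${funding_gap:.1f}T to be funded by credit markets")  # ~$1.5T
```

That residual ~$1.5 trillion is the pool the transcript expects private credit, corporate credit, and securitized credit to compete for.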
There is this difference between private credit 1.0, which is more of that middle market direct lending, and then private credit 2.0, which is more ABF – Asset Based Finance or Asset Backed Finance. And what we see there is an interest in leases of hyperscaler tenants, right? We've also seen in the market over the past nine months or so, investment grade bond issuance by hyperscalers. Obviously, a use of cash flow by hyperscalers. We've seen the construction loans with banks and also private credit per reports. We've also seen high yield bond issuance, which is kind of a new trend for construction financing. We've seen ABS and CMBS as well. And then something new that's emerging in focus for investors is more of a chip-backed or compute contract backed financings – more creative solutions. We're really in early innings of the spend right now. And so, there is this shift. As we start to work through the early construction phases, the next focus is: okay, but what about the chips? And so, I think a big focus is that, you know, chips are more than 50 percent of the spend if you're looking at a gigawatt site. And it depends what type of chips and what generation. But that's the next leg of this too. So, it's kind of a focus, you know, for 2026. Michelle Weaver: And how do you view balance sheet leverage and financing when you think about hyperscaler debt raising magnitude and timelines? Lindsay Tyler: So just to bring it down to more of a basic level, if you need compute, you really might need two things, right? A powered shell and then the chips. And so, if you're looking for that compute, you could go in three basic ways. You could build the shell and kind of build and buy the whole thing. You could lease the shell from, you know, a developer – maybe a Bitcoin miner, too, that has converted to HPC. And then you buy the chips and you put them in yourselves.
Or you could lease all the compute; quote unquote lease, it's more of a contract. In terms of the funding, if you're thinking about the cash flows of some of the big companies – think of that as primarily being put towards chip spend. If you're thinking about the construction, that's kind of split between cash CapEx but also leases. And so, what we've seen is that there is more than [$]600 billion of un-commenced lease obligations that will commence over the next two to five years, across the big four or five players. And then my equity counterparts estimate around [$]700 billion of cash CapEx that is needed this year for some of those players as well. So, these are big numbers. But that's kind of how, at a basic level, they're approaching some of the financing. It's a split approach. Michelle Weaver: And what have you learned around financing the past few days at the conference? Anything incremental to share there? Lindsay Tyler: Sure. Yeah. I think I found confirmation of some key themes here at the conference. The first being that numerous funding buckets are available. That was a big focus of our note last year – that you can look at asset level financing, you can look at public bonds, you can look at some equity. There are these different funding buckets available. The second is that tenant quality matters for construction financing. I think I've seen this more in the markets than maybe at this conference over the past two to three weeks. But that has been a focus of pricing for the deals, but also market depth for the deals. A third confirmation of a key theme was around the neo clouds and also the GPU-as-a-service business models. Thinking about those creative financings, right? What are they asking of their compute counterparties? Would they like upfront payments? Might they look to move financing off [the] balance sheet, if they have a very high-quality investment grade rated counterparty? So, there is some of this evolution around those solutions.
And then a fourth key theme is just around the credit support. Stephen and I have talked about this around some of the Bitcoin miners – that, you know, there can be these higher quality investment grade players that might look to lend their credit support, maybe a lease backstop, to other players in the ecosystem in order to get better pricing on construction financing. And we are seeing some press pickup around how that might play out in chip financing down the road too. Michelle Weaver: Mm-hmm. AI driven risk and potential disruption has been a big feature of the price action we've seen year-to-date in this theme. Stephen, what are some asset classes or businesses you see as resistant to some of this disruption? Stephen Byrd: We spend a lot of time thinking about, sort of, asset classes that are resistant to deflation and disruption. And what's interesting is there's actually a handful of economists in the world doing remarkable work on this concept – what they would call the economics of transformative AI. There are three Americans, two Canadians, two Brits, a number of others who are doing really, really interesting work. And essentially what they're looking at is: what do economies look like as we see very powerful AI enter many industries and cause price reductions, deflation… What does that do? They have a lot of interesting takeaways, but one is this idea that the relative value of assets that cannot be deflated by AI goes up. Very simple idea. But think of it this way, I mean, there's only, you know, one principal resort on Kauai. You know, there's a limited amount of metals. And so, what we go through is this list that's gotten a lot of investor attention of resistant asset classes – or more of the resistant asset classes – that can go up in value. So, there are obvious ones like land, though you have to be a little careful with real estate in the sense that, like, office real estate probably wouldn't be where you would go.
Nor would you potentially go sort of towards middle income, lower income housing. But more, you know, think of industrial REITs, higher-end real estate. But there are a lot of other categories that are interesting to me. All kinds of infrastructure should be quite resistant, all kinds of critical materials. Metals should do extremely well in this. But then when you go beyond that, it's actually kind of interesting that, arguably, there's a longer list than those classic sort of land and metals examples. Examples here would be compute… Michelle Weaver: Mm-hmm. Stephen Byrd: I thought Jensen put it well, you know: if there's a limited amount of infrastructure available, you want to put in the best compute. And ultimately, in some ways, intelligence becomes the new coin of the realm in the world, right? So, I would want to own the purveyors of intelligence. It could include high-end luxury. It could include unique human experiences. So, I don't know how many of y'all have children who are sort of college age. But my children are college age, and they absolutely hate what they would call AI slop. They want legit human content, and they seek it out. And they absolutely hate it when they see bad copies of human content. And so, I think there is a place in many parts of the economy for unique human experiences, unique human content, and it's interesting to kind of seek out where that might be in the economy. So those would be some examples of resistant assets. Michelle Weaver: Mm-hmm. Josh, software's been at really the center of this AI disruption debate. How would you compare the current pullback in software multiples to prior periods of peak uncertainty? And do you think any of these concerns are valid? Or how are you thinking about that? Josh Baer: Great question. I mean, software multiples on an EV to sales basis are down 30-35 percent just from the fall, I will say. And that's overall in the group.
A lot of stocks, multiple handfuls, are down 60-70 percent over the last year. And what's being priced in is really peak uncertainty, a lot of fear. And these multiples, now four times sales, take us all the way back about 10 years to the shift to cloud. And this time in many ways reminds us of that period of peak fear. In this case, what's being priced in is terminal value risk. We talked about this TAM yesterday. But you know, who is going to win that share? How is it divided from a competitive perspective across these model providers? The LLMs with new entrants. Of course, the incumbents. And this other idea of in-housing. Michelle Weaver: Mm-hmm. Josh Baer: So, there's competitive risk, there's business model risk. Are companies going to need to change their pricing models from seat-based to consumption or hybrid? And then, last, margin risk – just thinking about the higher input costs and higher capital intensity. And so, you know, all of those fears are being priced in right now. Michelle Weaver: And we, of course though, had a bunch of these companies live with us at the conference. How are they responding to some of these risks? How are they addressing these investor concerns? Josh Baer: Most of the companies here from our coverage are the incumbent software vendors. And I think that the leadership teams did a really nice job coming out and defending their competitive moats and really articulating the story of why they are in a great position to capitalize on the opportunity. And the reasons can vary across different companies. But some of the commonalities are around enterprise grade, trust, security, governance, acceptance from IT organizations. The idea of vibe coding all apps in an organization gets squashed when you actually talk to companies and chief information officers. For some companies there's proprietary data moats, network effects. All of that's on top of existing customer relationships.
And so, you know, that was the message from the companies that we had. That we're the incumbents. We get to use all of the same innovative AI technology in the same way that all these different competitive buckets do. But we have, you know, that differentiation in that moat. And so, we're in a good place. Michelle Weaver: I want to wrap on a positive note. Stephen, what did you hear at the conference that you're most excited about? Stephen Byrd: I'd say the life sciences. A few investors pointed out that perhaps AI has a PR problem these days. And I do think showing a significant benefit to humanity in terms of improved health outcomes, whether that's just better diagnosis, you know. Away from this event, but I was in India the week before and, you know, AI can have a powerful benefit to the people who suffer the most in terms of providing very powerful medical tools in a distributed manner. So, I'm a big fan there. But you know, in many ways, curing the most challenging diseases plaguing humanity. The kinds of problems involved in developing and providing those cures are perfect for AI. So that, for me – stepping way back – is by far the most exciting thing. Michelle Weaver: Josh, same to you. What are you most excited about? Josh Baer: From my perspective, it's potentially the turning point for software. The ability to showcase that we are at this inflection point and acceleration. It takes time for our software companies to develop new AI technologies, put that into products that have been tested and proven, and go through the enterprise adoption cycle. And we're at the cusp of more adoption – that's what our survey work says. And to see that inflection, I think, can help to rerate this sector. Michelle Weaver: Lindsay, same question for you… Lindsay Tyler: Maybe I'll tie it to markets. I've already had a lot more conversations with equity investors over the past, how many months?
There's a big fixed income focus right now, which is a great, you know, spot and really interesting opportunity in my seat. And there's a lot of interesting structures coming to be right now in the credit space. So, I think it's an exciting time. Michelle Weaver: Lindsay, Stephen, Josh, thank you very much for joining to recap the event and let us know what you learned at the conference. To our audience, thank you for listening here live. And to our audience tuning in, thanks for listening. If you enjoy Thoughts on the Market, please leave us a review wherever you listen. And share the podcast with a friend or colleague today.
On this week's episode of The MacRumors Show, we discuss Apple's concentrated week of announcements that saw the introduction of 10 new products.

The most significant announcement of the week was the MacBook Neo, an all-new entry-level Apple laptop that starts at $599. The MacBook Neo is designed to compete with lower-cost Windows laptops and Chromebooks, while expanding the Mac lineup with a substantially more affordable option.

Unlike every other Apple silicon Mac, the MacBook Neo is powered by the A18 Pro chip originally developed for the iPhone 16 Pro, making it the first Mac to use an iPhone-class processor instead of an M-series chip. The machine features a rounded, colorful design available in Silver, Indigo, Blush, and Citrus finishes, with matching keyboards and wallpapers that give it a more playful appearance than Apple's existing notebooks. At 2.7 pounds, it weighs the same as a MacBook Air.

It offers a 13-inch Liquid Retina display with uniform, iPad-style bezels rather than a notch, a Magic Keyboard, a mechanical trackpad, two USB-C ports, 8GB of memory, a headphone jack, a 1080p camera, dual mics, dual speakers with Spatial Audio, and a battery life rated for up to 16 hours.

Apple also updated several existing devices with modest specification improvements. The iPhone 17e retains the same design and price as the iPhone 16e but adds the A19 chip, MagSafe support, Apple's second-generation C1X modem, and 256GB of base storage.

The 11- and 13-inch iPad Air gained the M4 chip, 12GB of RAM, Wi-Fi 7 support via Apple's N1 wireless chip, and the same C1X modem in cellular models.
Meanwhile, the 13- and 15-inch MacBook Air were upgraded with the M5 chip and a higher base storage capacity of 512GB, though the removal of the 256GB option increased the starting price to $1,099.

At the high end of the Mac lineup, Apple refreshed the 14-inch and 16-inch MacBook Pro models with the new M5 Pro and M5 Max chips, introducing a "Fusion Architecture" that bonds two 3nm dies together into a single processor. These models also gained faster SSD speeds, higher base storage, and Wi-Fi 7 and Bluetooth 6 via the N1 chip. Battery life increased slightly across the lineup, while GPU cores now include dedicated Neural Accelerators intended to improve AI workloads.

Apple also expanded its display lineup with a new Studio Display XDR model, replacing the Pro Display XDR. The new model offers a 27-inch 5K mini-LED panel with up to a 120Hz refresh rate, HDR brightness up to 2,000 nits, and Thunderbolt 5 connectivity. The standard Studio Display was updated at the same time with two Thunderbolt 5 ports, improved speakers, and a camera that now supports Desk View, but retains its 60Hz panel and 600-nit brightness.

All of the newly announced devices became available to pre-order on Wednesday, March 4, with the entire lineup scheduled to launch and begin arriving to customers on Wednesday, March 11.

Get the right life insurance for you, for less, and save more than fifty percent at https://www.selectquote.com/macrumors

00:00 - Intro
01:17 - iPhone 17e
06:42 - M4 iPad Air
08:46 - M5 MacBook Air
11:53 - Sponsor: SelectQuote
13:40 - MacBook Pro: M5 Pro and M5 Max Overview
21:30 - Studio Display
25:58 - Studio Display XDR
38:05 - Introducing the MacBook Neo
Technovation with Peter High (CIO, CTO, CDO, CXO Interviews)
In this episode of Technovation, Peter High speaks with Stephen Witt, award-winning journalist and author of The Thinking Machine, which has been named Business Book of the Year by the Financial Times. Witt writes about Jensen Huang's improbable journey from near-bankruptcy in the 1990s GPU wars to leading NVIDIA at the center of the AI revolution. Witt unpacks how NVIDIA defeated nearly 70 competitors, why Huang began targeting "zero-billion-dollar markets," and how CUDA became the backbone of modern AI.

Key highlights from the episode:
* How investing in zero-billion-dollar markets created durable platform advantage
* The emerging bull and bear cases for NVIDIA in robotics, edge computing, and global competition
* The strategic lessons NVIDIA extracted from surviving a 70-competitor GPU market
* Why operating with a constant "near-death" mindset shaped long-term execution discipline
MacBook Neo costs $599 and uses the A18 Pro chip in a massive education play
By Félix Riaño @LocutorCo

Apple introduced the MacBook Neo as its most affordable laptop. It starts at $599, or $499 for students. It uses the iPhone's A18 Pro chip and promises up to 16 hours of battery life. It ships with 8 GB of memory and 256 GB of storage. The question is: is it a real bargain, or bait to pull you into the ecosystem?

Apple decided to compete head-on in the $599 laptop segment. It did so with the new MacBook Neo: a 13-inch machine with a 2408 x 1506 Liquid Retina display at 500 nits of brightness. It weighs 1.2 kilograms and is 1.27 centimeters thick. It has two USB-C ports, with one awkward difference: one is USB 3 at up to 10 gigabits per second, and the other is USB 2 at 480 megabits per second.

The processor is the A18 Pro, the same chip used in the iPhone 16 Pro. The base model comes with 8 GB of unified memory and 256 GB of storage. The battery promises up to 16 hours of video playback and 11 hours of web browsing. The official price is $599, dropping to $499 with the education discount.

Apple claims it is up to 50 percent faster at everyday tasks than the best-selling laptop with an Intel Core Ultra 5, based on the Speedometer benchmark. But here's the question: are we looking at a new standard of value, or a cut-down Mac with good marketing?

An affordable Mac… with trade-offs

Apple didn't used to compete in this bracket. The most recent MacBook Air with the M5 chip starts at $1,099. The drop to $599 is a big one: a $500 difference. That changes the audience. Now we're talking about students, families, and people who previously looked at a Chromebook or a Windows laptop.

The MacBook Neo keeps the aluminum chassis. It feels like a Mac. It comes in colors like Citrus, Blush, Indigo, and silver. That choice recalls the iBook G3 of the early 2000s. Apple is sending a message: this is the Mac for young people.

The display keeps good resolution and brightness. There's a 1080p camera and speakers with Dolby Atmos. But then the compromises begin: the keyboard has no backlight. The trackpad is mechanical, not haptic. It only supports one external monitor, at 4K 60 Hz. There's no MagSafe port. And Touch ID only appears on the 512 GB model, which costs $699.

Apple isn't hiding that there were concessions. It's saying the price justifies them. Does that seem like enough to you?

Here's the delicate point. The MacBook Neo uses an iPhone chip, not an M-series chip. That breaks the logic Apple had been building since 2020, when it migrated every Mac to Apple Silicon with an architecture designed for computers.

The A18 Pro has six CPU cores: two performance and four efficiency. It has five GPU cores and support for ray tracing. For light tasks like browsing, writing, and watching video, it will do fine. But for 4K video editing, 3D modeling, or large programming projects, it may fall short of a MacBook Air with an M-series chip.

On top of that, 8 GB of memory is the ceiling. There's no 16 GB option. In 2026, many users already consider 8 GB the bare minimum. If you open lots of tabs, video calls, and apps at once, you'll feel the system strain.

Another detail: only one of the USB-C ports is USB 3. The other is USB 2. That means you can connect a monitor or get fast transfers, but not everything at once at the same speed. For a machine aimed at students, that may be enough. For someone who wants to grow with the machine, it may feel limiting.

So the real question arises: is this a smart entry point, or a way to segment the market further and later push users toward more expensive models?

Apple didn't improvise this move. The budget-laptop market was dominated by Chromebooks and sub-$700 Windows machines. Many of them offer good battery life and acceptable performance. What Apple brings here is premium construction, iPhone integration, and full access to macOS Tahoe.

The MacBook Neo lets you copy and paste between iPhone and Mac. It runs the ecosystem's apps. It's ready for Apple Intelligence. That means Apple wants young users to enter the ecosystem early and then, when they need more power, move up to an Air or a Pro.

From a strategic standpoint, it makes sense. From a technical standpoint, there are clear limits. If you're a student who writes, browses, and works in the cloud, this machine may be enough for several years. If you're a content creator, designer, or demanding developer, you'll probably need a model with an M-series chip and more memory.

The $599 price makes it the most affordable Mac ever at official launch. That changes the conversation. But it also redefines what we mean by a "full Mac."

The final decision isn't emotional. It's practical. What are you going to do with it every day?

The launch came alongside other announcements, like the iPhone 17e and the new MacBook Pro with M5 Pro and M5 Max chips. The contrast is stark. While the Neo drops to $599, the 16-inch MacBook Pro can exceed $7,000 in high-end configurations.

The Neo weighs 1.2 kilograms, the same as the MacBook Air. Its battery is 36.5 watt-hours. Apple claims up to 16 hours of video. That figure is usually measured under controlled conditions, with moderate brightness and optimized apps. Real-world use may vary.

In the UK and the European Union, the charger is not included in the box, only the USB-C cable. In the United States it does include a 20-watt adapter. That detail cuts logistics and environmental costs, but it can also be an annoyance.

The education discount brings the price down to $499. That puts it in iPad Air territory. Apple is competing against its own catalog. If someone is torn between an iPad with a keyboard and a MacBook Neo, the gap is now smaller.

And one more thing: it only supports one external monitor. For anyone who uses two screens, that's a concrete limit. It's not a minor detail.

All of this adds up to an attractive but very calculated product. Apple measured every concession.

The MacBook Neo opens the door to the Mac ecosystem at $599. It offers good design and enough performance for basic tasks. It has clear limits in memory and ports. Before buying, think about how you'll actually use it.

Tell me what you think, and follow me on Flash Diario.

TikTok summary (20 words): The MacBook Neo costs $599, uses an iPhone chip, and targets students. A good price, but with clear limits.

Bibliography: Wallpaper, Wired, The Telegraph, TechRadar, MacRumors, PCMag, Macworld, Creative Bloq

Become a supporter of this podcast: https://www.spreaker.com/podcast/flash-diario-de-el-siglo-21-es-hoy--5835407/support. Support Flash Diario and listen ad-free in the Supporters Club.
Artificial intelligence is rapidly transforming the business landscape, redefining how value is created and where human work fits within the new paradigm. Long-standing advice to amass knowledge and out-execute others is now running up against sophisticated AI agents that can process information and perform tasks at speeds and scales unattainable by humans. In this emerging era, Christopher Lochhead's insights point to a critical shift from being a traditional "knowledge worker" to embracing the future as a "creator capitalist." On this episode, Christopher Lochhead moves over to the guest chair and answers our questions about AI, Creator Capitalists, and the future of work. You're listening to Christopher Lochhead: Follow Your Different. We are the real dialogue podcast for people with a different mind. So get your mind in a different place, and hey ho, let's go.

Why the Knowledge Worker Playbook Is Obsolete

For decades, success in business hinged on being a master of knowledge and execution. This model rewarded those who reacted effectively, put out fires, and delivered results with established frameworks. However, with AI making information and execution nearly free and instantly accessible, simply reacting and executing is no longer enough. As Christopher Lochhead argues, clinging to this outdated success formula is akin to opening a video rental store in the age of streaming services. Today, the competitive edge lies in moving upstream to activities that AI cannot easily replicate. This means focusing on judgment, unique perspectives, and the ability to define, frame, and solve new problems. Humans cannot out-execute a GPU, but they can out-create one by leveraging skills that remain distinctly human.

The Four Capitals of the Creator Capitalist Framework

Lochhead's Creator Capitalist concept rests on the mastery and integration of four kinds of capital: intellectual, relationship, reputational, and financial.
Intellectual capital emerges from differentiated insights, deep domain expertise, and unique perspectives. Relationship capital is built through genuine connections and trust within your network, while reputational capital is earned through tangible results and reliability, not just self-promotional branding. Bringing these capitals together creates a flywheel that drives lasting success, even as AI commoditizes old sources of value. Financial capital follows as a natural result of delivering value that others find meaningful. Those able to orchestrate these four capitals will build not just AI-resistant careers but ones supercharged by the new opportunities technology presents.

Unleashing Human Potential: Adapt, Create, and Lead

As AI handles more routine tasks, the future belongs to those who cultivate curiosity, creativity, and critical thinking. These human abilities enable us to ask better questions, generate bold ideas, and envision solutions no algorithm can predict. Lochhead urges professionals to take radical responsibility for their careers and continually seek ways to create net new value. Adapting to this shift means letting go of fear and embracing the opportunity to redefine what it means to be valuable. The most successful individuals and organizations will be those who harness AI as a tool to augment their creative power and lead the way into uncharted territory. The age of the creator capitalist has arrived, and it's time to build the future together. To hear more of Christopher Lochhead's thoughts on Creator Capitalist and the future of work, download and listen to this episode.

Links

Want to catch more episodes of the AI Agent & Copilot Podcast? You can check them out here: Presented by Cloud Wars | AI Agent and Copilot Podcast | John Siefert LinkedIn | Cloud Wars LinkedIn

We hope you enjoyed this episode of Christopher Lochhead: Follow Your Different™! Christopher loves hearing from his listeners.
Feel free to email him, connect on Facebook, X (formerly Twitter), and Instagram, and subscribe on Apple Podcasts / Spotify!
Apple has done it again. But this time it isn't "more of the same with a better score." On March 3, 2026, it introduced the M5 Pro and M5 Max chips in the new MacBook Pro, and what's inside is the most important architectural change since the M1 arrived. We're not talking about more cores or a finer manufacturing process. We're talking about rethinking from scratch how a chip is built. In this episode we take the Fusion Architecture apart piece by piece: what a die is, why splitting it in two changes the rules of the game, and what that means for thermal dissipation, for manufacturing, and for the future of Apple Silicon. We discuss the Neural Accelerators built into each GPU core, the increased bandwidth of the Neural Engine, the M5 Max's 614 GB/s of memory bandwidth, and why that matters more than GHz when we talk about on-device artificial intelligence. And we make the comparison with NVIDIA that everyone makes but almost no one makes well: CUDA vs MLX, H100 vs M5 Max, datacenter vs backpack. No flag-waving. With real numbers.
Dive into the heart of the AI revolution with Gary Brode from Deep Knowledge Investing. In this episode, we unravel the complex world of the semiconductors that power AI: from Nvidia's GPU dominance to ARM-based innovations, Intel and AMD's CPU roles, and the massive energy demands of data centres. Learn about key deals like the Nvidia-Meta collaboration, investment risks in hyperscalers, and opportunities in nuclear energy and uranium. Perfect for investors navigating the AI boom.
OpenAI's Pentagon Backlash, Microsoft's "MicroSlop" Filter, Apple M5 MacBook Pro Price Hikes, and Washington's Microchip Ban Jim Love covers backlash to OpenAI's rapid Pentagon deal announcement, with Sam Altman admitting it looked opportunistic as ChatGPT uninstall rates and one-star reviews spiked while Anthropic's Claude gained installs; OpenAI then revised contract language to state its AI won't intentionally be used for mass domestic surveillance or by agencies like the NSA without separate approval. He also discusses reports that Microsoft's Copilot Discord filtered the term "MicroSlop," prompting user workarounds and a server lockdown that Microsoft said was an anti-spam measure. Apple's new M5 MacBook Pro lineup adds higher default storage, claims faster internal storage and ~20% GPU gains, but raises prices and introduces a pricier Studio Display XDR with optional nano-texture. Finally, Washington State proposes banning mandatory employee microchip implants amid broader workplace surveillance concerns. 00:00 Sponsor Message Meter 00:19 OpenAI Pentagon Backlash 03:08 Microsoft MicroSlop Filter 05:33 Apple M5 MacBook Prices 07:10 Host Rant On Hype 07:34 Washington Microchip Ban 09:29 Wrap Up And Sponsor
This week on The Route to Networking podcast, Jamie Maher is joined by Tony Vrushaj, Data Centre Team Leader at IBM, to explore how data centres have shifted from background infrastructure to strategic assets powering AI, cloud, and national ambition.
Tony shares his non-linear journey into the industry, from fixing tractors as a child to building data centres in the UK during a period of rapid expansion. His path was shaped by curiosity, certifications such as CCNA and CompTIA, and a willingness to focus intensely on long-term goals rather than short-term comfort. As his career progressed, technical depth became only part of the equation. Mindset, communication, and leadership proved just as critical.
The conversation dives into how AI has transformed the data centre landscape. Higher-density racks, GPU-driven environments, and increased revenue per square metre have changed both the financial stakes and the operational risk. What was once a low-cost mistake can now carry seven-figure consequences. Teams are becoming smaller and more cross-functional, with greater emphasis on engineers who can move between hardware, customers, and executive conversations.
Energy emerges as the hidden constraint behind AI growth. Tony explains why power availability, cooling, and grid capacity are now strategic considerations, influencing where facilities are built and how governments think about AI sovereignty. This is no longer just a software race. It is an infrastructure and energy race.
Looking ahead, Tony offers a bold prediction for the future of data centres: space. With rising energy demands and cooling challenges on Earth, orbital infrastructure may move from theory to experimentation faster than many expect.
The episode closes with a quick-fire round covering underrated skills, the difference between certifications and degrees, and the trait that defines great engineers: comfort with uncertainty.
Want to stay up to date with new episodes?
Follow our LinkedIn page for all the latest podcast updates! Head to: https://www.linkedin.com/company/the-route-to-networking-podcast/
Interested in following a similar career path? Take a look at our jobs page, where you can find your next job opportunity. Head to: www.hamilton-barnes.com/jobs/
What if the real AI race in 2026 isn't about building bigger models, but about where decisions are made, how fast they happen, and whether they deliver measurable value? In this episode, I'm joined by John Bradshaw, Director of Cloud Computing Technology and Strategy at Akamai, to unpack his predictions for the next phase of cloud, AI inference, and the economics that will shape enterprise technology over the next 12 months. As organizations move beyond experimentation, John explains why the boardroom conversation has shifted from capability to return on investment, and how spiraling compute demands are forcing leaders to rethink the balance between performance, cost, and innovation. We explore why this new financial scrutiny is not slowing AI adoption, but refining it. John shares how inefficient GPU workflows, centralized inference, and poorly aligned architectures are being challenged by a more disciplined approach that pushes intelligence closer to the edge. This shift is not only about latency and performance. It is about building scalable, value-driven platforms that can support real-time decision-making, agentic workloads, and global user experiences without breaking traditional IT budgets. Trust is another major theme throughout our conversation. From the rise of everyday AI agents that quietly handle routine tasks to the growing importance of secure, resilient inference pipelines, John outlines how low-latency edge infrastructure, local processing, and hybrid cloud models will redefine reliability for both enterprises and consumers. We also discuss the smart home backlash following recent outages, and why the next generation of connected products will be designed to work even when the network does not. The episode also looks at the future of streaming, where consolidation, intelligent content delivery, and AI-driven personalization are reshaping both the user experience and the economics behind the platforms. 
Behind the scenes, orchestration is emerging as a defining capability, with multiple models and services working together to validate outputs, reduce hallucinations, and create more dependable AI systems. This is a conversation about moving from possibility to production, from experimentation to accountability, and from centralized architectures to distributed intelligence. So as AI becomes embedded in every workflow and every customer interaction, will the winners be the companies with the biggest models, or the ones that know exactly where their AI should live, how it should be orchestrated, and how it proves its value every single day?
Pausing a product roadmap for an entire month to point 700 engineers at a single goal is a significant structural shift, but it transformed monday.com. Andrew sits down with VP of R&D Sergei Liakhovetsky to uncover how fixing core infrastructure and adopting a cell-based architecture paved the way for platform scale. Sergei details the exact framework his leadership team used during their 30-day pause to launch user solutions while maintaining a strict zero-bureaucracy policy. The conversation also explores the new realities of reliability as platforms transition from being CPU-bound to heavily GPU-bound under the weight of automated agents.
Follow the show:
Subscribe to our Substack
Follow us on LinkedIn
Subscribe to our YouTube Channel
Leave us a Review
Follow the hosts:
Follow Andrew
Follow Ben
Follow Dan
Follow today's guest:
monday magic: A tool for generating initial work solutions and boards using simple prompts.
monday vibe: An app builder that allows users to create custom applications on top of the monday.com platform.
Sidekick: The horizontal AI assistant/copilot that works across the entire platform to help with tasks like data management and content generation.
Agent Factory: A platform for building vertical, specialized agents that can handle specific workflows and roles.
Connect with Sergei Liakhovetsky on LinkedIn
OFFERS
Start Free Trial: Get started with LinearB's AI productivity platform for free.
Book a Demo: Learn how you can ship faster, improve DevEx, and lead with confidence in the AI era.
LEARN ABOUT LINEARB
AI Code Reviews: Automate reviews to catch bugs, security risks, and performance issues before they hit production.
AI & Productivity Insights: Go beyond DORA with AI-powered recommendations and dashboards to measure and improve performance.
AI-Powered Workflow Automations: Use AI-generated PR descriptions, smart routing, and other automations to reduce developer toil.
MCP Server: Interact with your engineering data using natural language to build custom reports and get answers on the fly.
What happens when the cost of intelligence drops to zero? The only thing that matters is knowing how to give the right instructions.
In this episode, Andreas Bachmann, co-founder of Adacor, a managed cloud and critical infrastructure provider serving banks, automotive, healthcare, and energy clients across Germany, shares what 22 years of deliberate, founder-led growth actually looks like. We explore the real tension between innovation and zero-tolerance uptime, the co-founder crisis that almost broke the company, and why Andreas believes the primary job of every knowledge worker in five years won't be doing the work: it'll be managing the agents doing it for them.
What You'll Discover:
[00:01:19] Innovating When Failure Is Not an Option: How Adacor runs experiments for critical infrastructure clients who can't afford a single hiccup, and the mental model that makes it work
[00:05:30] The Sustainable Growth Playbook: Why Andreas chose deliberate, step-by-step growth over hypergrowth, and how that decision made Adacor more competitive, not less
[00:13:49] The Co-Founder Crisis Nobody Talks About: At 40–50 people, Adacor fractured into silos and the founding team needed "marriage counseling": what they decided, and who stepped back
[00:17:34] Self-Organization Without Chaos: How Adacor implemented OKRs, dailies, and retrospectives in a high-stakes environment, and the one thing that makes retros actually stick
[00:23:37] Building a Human-Centered Tech Company: From family compatibility programs to volunteer firefighter support, and why Andreas treats the company as the strong one, not the individual
[00:27:26] The AI Question: Bullshit or Real? Why Andreas went all-in on AI in 2022, how Adacor hacked EU innovation grants to build an AI team years early, and why he skipped the GPU commodity race entirely
[00:34:16] The Future of Work Is Managing Agents: Andreas's thesis on what happens when intelligence is automated and essentially free, and what human value actually looks like on the other side
Key Takeaways:
Sustainable growth is a competitive advantage in high-trust industries; adding people too fast breaks the thing clients pay you for
"Fast fashion software": non-developers are already using AI to write and discard code; this is a glimpse of where all knowledge work is headed
The best retros are useless without a committed "what do we do about it now?"; every retrospective at Adacor must produce 1–3 actionable initiatives
The co-founder transition from parallel silos to one clear direction is one of the most underreported breaking points in company building
The new leadership superpower isn't having all the answers; it's knowing when to step back and trust the people who do
About Andreas Bachmann:
Andreas is co-founder and CEO of Adacor, a German managed cloud and critical infrastructure company he's been building for over 22 years with a deliberate focus on stability, human-centered culture, and innovation that doesn't break things. He's also a founding force behind Media Monster, an initiative supporting mental health and work-family compatibility in tech.
QuitGPT Claims Surge, NVIDIA's Vera Rubin 10x Efficiency, Remote Work Pay Premium & Brain Cells Play Doom | Hashtag Trending Jim Love covers claims from QuitGPT.org that 1.5 million people have taken action against ChatGPT, noting the figure mixes signups, shares, and cancellations and that substantiated numbers remain unclear amid negative OpenAI headlines and a possible rise in interest in Anthropic's Claude, which hit #1 on the Apple App Store and saw an outage from "unprecedented demand." NVIDIA announces its next AI platform, Vera Rubin, claiming 10x performance per watt over Grace Blackwell, higher NVLink bandwidth, and a rack-scale 72-GPU/36-CPU system aimed at lowering energy per inference and defending market leadership. A French study finds remote/hybrid workers earn about 12% more (about 6% after controls). Researchers also taught lab-grown human neurons on a chip to play Doom via electrical feedback. Apple updates iPad Air with the M4 chip, and a developer describes being locked out of a premium Google AI account with no clear human support escalation. 00:00 Sponsor Message 00:21 Today's Headlines 01:00 QuitGPT Backlash 04:28 Nvidia Vera Rubin 07:12 Remote Work Pay Premium 09:13 Brain Cells Play Doom 11:02 M4 iPad Air Update 11:35 Locked Out of AI Account 13:25 Wrap Up and Sponsor Thanks
In today's Tech3 from Moneycontrol, we track how airspace disruptions in West Asia are unsettling Indian founders and investors operating through Dubai. We unpack fresh pressure points for the IndiaAI Mission amid global GPU supply dynamics, take you inside IIT Madras' ambitious 400-brain mapping project, and examine Rapido's entry into food delivery with its zero-commission app Ownly in Bengaluru.
NVIDIA validated ST's 12 kW power delivery proof-of-concept board, effectively moving into production testing. The GPU maker is even showcasing a complete prototype designed by ST at OCP 2025, the summit organized by the Open Compute Project.
Today at the table:
Barend Baarssen
Karel van der Woude
Ruben van der Zwaan
Paul Hagg
Timeline:
0:00 Intro
0:29 Introductions + who is MavenBlue
2:26 What has MavenBlue built in the IBM Cloud?
7:00 Why IBM Cloud?
7:54 Pizza boxes vs Serverless Fleets
11:27 Advantages of Serverless Fleets
17:32 What else does MavenBlue do
20:45 Cloud Storage
24:10 Cloud Security
Shownotes:
We dive into the world of extremely compute-intensive software with Ruben van der Zwaan (Application Owner of the Economic Scenario Generator) and Paul Hagg (responsible for all software and cloud hosting) at MavenBlue.
Barend met Paul last year at IBM TechXchange in Orlando, where MavenBlue gave a demo showing how they built their Economic Scenario Generator on the IBM Cloud. What did Barend see there? A solution that handles CPUs and GPUs in a smart, dynamic way, allowing thousands to millions of scenarios to be computed simultaneously. No more waiting times, just pure, scalable compute power.
MavenBlue started in 2017 and in 2020 moved its "pizza boxes" to the cloud, where they still play a crucial role, but now fully virtualized. Modern challenges, such as peak loads at the start of each month, are now solved with Serverless Fleets, so hardware is only switched on when it is actually needed. GPUs are scaled up automatically, run at full throttle, and disappear again as soon as the job is done. Efficient, fast, and cost-conscious.
With their Docker-based approach, IBM keeps all the infrastructure "boring," so MavenBlue can focus entirely on what they do best: building high-quality, parallel computations for the insurance world. Meanwhile, they continue working on their cloud storage architecture to take the next step there as well.
The theme of our podcast, "Of Je Stopt De Stekker Er In" ("Or You Plug It In"), may fit this episode better than ever. At MavenBlue, the virtual plug only goes in when the compute capacity is needed.
And as soon as the work is done? It comes back out just as quickly.
Links:
MavenBlue: https://mavenblue.com
Economic Scenario Generation: https://mavenblue.com/solutions-esg/
LinkedIn Paul Hagg: https://www.linkedin.com/in/paul-hagg-7b8bbb1/
LinkedIn Ruben van der Zwaan: https://www.linkedin.com/in/ruben-van-der-zwaan-19017295/
Comments and suggestions can be sent to: ofjestoptdestekkererin@nl.ibm.com
Dan Nathan and Guy Adami cover PPI, upcoming earnings, and this week's jobs report. They focus on mounting stress in the AI infrastructure and financing complex: CoreWeave's post-earnings drop, heavy customer concentration, funding challenges, and Jim Chanos' critique that its GPU-leasing model loses money and shows distress-level liquidity, alongside declines in Apollo, KKR, Blackstone, and banks. They contrast Nvidia's strong quarter and 60% growth outlook with stock stagnation, discuss Broadcom as a key AI barometer, and note ongoing software multiple and margin compression highlighted by volatile moves in Workday and Salesforce. Despite rising VIX swings, falling 10-year yields, and consumer-credit concerns signaled by AmEx, Capital One, Klarna, and Walmart trade-down commentary, the S&P remains near highs; they also discuss crude's rebound amid Middle East tensions and Bitcoin weakness pressuring MicroStrategy. After the break, Jen & Kristen join Dan and Guy live from the iConnections Global Alts conference in Miami to unpack an “AI panic” market day, why higher productivity could mean higher rates, and what private credit hiccups really signal for hedge funds and alts. They also explain how The Wall Street Skinny is turning arcane finance jargon into plain English for everyone from college students to the C‑suite, plus why there are no dumb questions when it comes to bonds, credit, and careers on Wall Street. Timecodes 0:00 - Intro 2:00 - CoreWeave & The Software Slide 17:30 - VIX, SPX & The Consumer 25:00 - Yields & Crude 28:30 - Bitcoin & Broader Market 33:20 - He Said, She Said
I am thrilled to welcome Marenza Altieri-Douglas, an executive in sales and technology. She's trained in structured enterprise environments and startups, and is steeped in opening new markets and building commercial enterprise. That's not going to be our focus today; instead, we talk about how she is an incredible storyteller, rooted in concepts like disruption and cultivation. Her personal story is key to the narrative, and I was thrilled she joined us to share that story and how she ties it all together, leading and operating in the current business climate. Marenza Altieri-Douglas' career sits at the intersection of technology evangelism and disciplined execution. Trained in structured, enterprise environments and refined in startups and scale-ups, she specializes in defining strategic direction, opening new markets, and building compelling commercial propositions for enterprise and C-suite customers across Fortune 500 and Global 5000 organizations. She has worked across and alongside technologies including Conversational and Generative AI, APIs, DevOps, open-source platforms, cloud and containerized architectures, enterprise mobility, security, communications, media and broadcast, telecoms, and digital platforms. AI is a natural evolution of this journey, alongside a strong strategic interest in GPU-enabled infrastructure and quantum technologies. Marenza is known for building high-trust relationships, spotting and growing talent, and connecting product, engineering, and commercial teams around clear outcomes. A natural storyteller and facilitator, she enjoys shaping narratives that help organizations and customers understand why a technology matters, not just what it does.
(4:50) We delve into Marenza's formative years that put her on her current path. She shares her personal and professional story.
(17:18) When did Marenza realize that "disruption" and challenging things had become part of her brand?
(22:38) What does Marenza feel are some of the important qualities that people should embody?
(28:20) Marenza shares how she focuses on the future and the next generation.
(39:16) We reflect on what Marenza would like her impact to be over the next couple of years.
Connect with Marenza Altieri-Douglas: https://www.linkedin.com/in/marenza/
Subscribe: Warriors At Work Podcasts
Website: https://jeaniecoomber.com
Facebook: https://www.facebook.com/groups/986666321719033/
Instagram: https://www.instagram.com/jeanie_coomber/
Twitter: https://twitter.com/jeanie_coomber
LinkedIn: https://www.linkedin.com/in/jeanie-coomber-90973b4/
YouTube: https://www.youtube.com/channel/UCbMZ2HyNNyPoeCSqKClBC_w
This week on The GovNavigators Show, Robert Shea and Adam Hughes sit down with Deep Grewal, Vice President of Public Sector at MinIO, to unpack the findings of a new survey on the federal government's AI readiness, and why so many agencies are still stuck in the pilot phase.
While AI ambition is everywhere, Deep explains that the real bottleneck is data management. From lineage and governance to infrastructure, portability, and total cost of ownership, the conversation makes the case that the unglamorous foundational work will determine which agencies actually scale AI and which remain in perpetual experimentation.
They dig into the tension between cloud-first and cloud-smart, the rise of hybrid and sovereign architectures, the GPU and storage crunch, and why AI must become a mission-wide capability rather than a bolt-on "innovation project." Deep also lays out a practical checklist for moving to enterprise AI: get your data house in order, modernize infrastructure, upskill the workforce, establish governance, and prove the ROI.
If you're trying to move from AI pilots to real production, this episode is your roadmap.
Show Notes:
MinIO's Federal AI Readiness Gap
Anthropic's stand-off
One man's big bet against DOGE
What's on the GovNavigators' Radar:
Mar 4, 2026: Alliance for Digital Innovation's Understanding OneGov: Discussions with GSA Leadership
Mar 5, 2026: The MUST ATTEND Driving Government Efficiency Summit
Mar 11, 2026: Data Foundation event on Treasury's Do Not Pay
Mar 19, 2026: RSM Webinar: AI Governance and Responsible Adoption in Government
AI funding rounds are getting bigger. Infrastructure bets are getting steeper. And the SaaS model is back under pressure. On episode 294 of The Six Five Pod, Patrick Moorhead and Daniel Newman break down the $110B OpenAI raise, Amazon's expanded role, AMD's $100B Meta deal, sovereign cloud momentum, and whether the SaaS premium is being permanently eroded. The handpicked topics for this week are: OpenAI's $110B Funding Round & Amazon's $50B Commitment: OpenAI secured a $110B round backed by Amazon, NVIDIA, and SoftBank. Amazon committed $50B over eight years, including Trainium capacity, co-development, Bedrock integration, and custom model initiatives. Microsoft remains the exclusive API cloud provider, but the competitive cloud dynamics are shifting. Anthropic, the Pentagon & the AI Safety Line: Anthropic risks a $200M DoD contract over refusing to drop safety restrictions related to mass surveillance and automated weapons. Pat and Dan explore the ethics and competitive positioning of this, and what happens if another lab steps in. Model Distillation & IP Risk: Anthropic cited 24,000 fraudulent accounts generating 16 million interactions to distill model capabilities. The episode examines IP theft, enforcement gaps, and global competition. DeepSeek & NVIDIA Blackwell Reports: Recent reports suggest DeepSeek leveraged NVIDIA Blackwell chips. The hosts discuss export controls, enforcement realities, and whether this was ever realistically in doubt. Microsoft Sovereign Cloud Goes GA: Microsoft introduced full-stack Azure sovereign cloud capabilities with support for disconnected operations. Sovereignty, regulatory compliance, and latency management are becoming core enterprise and government requirements. AMD's $100B Meta AI Infrastructure Deal: AMD secured a massive multi-gigawatt inference-focused deal with Meta using MI450. The discussion centers on competitive dynamics with NVIDIA, scale-up architecture, and whether AMD can materially shift market share.
Intel & SambaNova Alignment: Intel Capital invested in SambaNova's Series E. The hosts examine inference strategy, CPU resurgence, and how Intel rounds out its AI positioning while advancing its GPU roadmap. The Flip: Is SaaS Permanently Repriced? Are enterprise SaaS multiples structurally resetting due to AI agents and consumption models, or is the market misreading enterprise AI adoption speed? Nuance emerges around consolidation, consumption pricing, and the durability of complex enterprise platforms. Bulls & Bears: NVIDIA, Salesforce, Synopsys, Dell, Snowflake, IBM, Everpure, HP Strong earnings across several big tech companies met with mixed market reactions. Terminal value concerns, consumption transitions, stock-based compensation, and memory constraints shape sentiment more than raw performance. For a deeper dive into each topic, subscribe to The Six Five Pod so you never miss an episode.
On the latest Blockspace roundup, the gang covers Block's 40% workforce reduction and our scoop that Magic Eden is quitting the Bitcoin and Ethereum NFT game. Get your tickets to OPNEXT 2026 before prices increase! Join us on April 16 in NYC for technical discussions, investor talks, and intimate conversation with the brightest minds in Bitcoin. Welcome back to The Blockspace Podcast! Today, Charlie and Colin cover Block's 40% workforce reduction and why the stock ripped 20% on the news. We also dive into the bitcoin mining conditions that are driving hashprice to all-time lows, Blockspace's scoop that Magic Eden is sunsetting its Bitcoin Ordinals marketplace, MARA's latest AI partnership, and the Terra/Luna lawsuit against Jane Street. Plus, Luxor's Michael San Miguel joins the show to discuss the ins and outs of the GPU market. Subscribe to the newsletter! https://newsletter.blockspacemedia.com Notes: * Block laid off 40% of its 10,000 employees. * Block stock surged 20% after the layoff news. * Bitcoin hashprice hit an all-time low of $28. * Bitcoin difficulty adjusted upward by 14.73%. * Magic Eden is shutting down its BTC and ETH marketplaces and multi-chain wallet. * Bitdeer sold all its bitcoin; Cipher plans to sell its bitcoin in 2026. * MARA forms partnership with data center developer Starwood. Timestamps: 00:00 Start 03:33 Hashrate update via Luxor's Hashrate Index 09:29 Block lays off 40% of staff 16:37 Magic Eden shutting down 25:54 GPUs & compute 28:03 GPU vs ASIC complexity 29:04 Upgrading hardware 32:16 Finding a compute buyer 34:00 Powershell vs Neocloud 37:12 Compute still in price discovery mode 42:05 MARA earnings 45:20 CIPHER dumping bags 48:44 Jane Street is the new boogeyman 59:34 Everyone's short MSTR
A Note from James:
In the last episode, we talked about whether Martin Shkreli really deserves the label "most hated man in America." My conclusion was no, and I hope you came to the same conclusion after hearing his perspective.
In this episode, we shift gears completely. We talk about Bitcoin, crypto, AI, energy, optical computing, and what the future of technology might actually look like.
Martin has a very unusual combination of skills in finance, biotech, and programming, and I always enjoy hearing how he connects ideas across different fields. That's what this conversation is about.
Episode Description:
What happens when AI demand collides with the limits of computing power and energy?
In Part 2, Martin Shkreli and James explore the future of technology, from crypto vulnerabilities to optical computing, GPU scaling, and the potential energy crisis driven by artificial intelligence.
They discuss whether Bitcoin can survive quantum computing, why stablecoins solve real-world financial problems, and how computing architecture may shift beyond traditional silicon chips. The conversation then moves into AI economics: why companies might spend billions on compute to make better decisions, how energy constraints could shape innovation, and why optical computing could become the next major breakthrough.
This episode isn't about controversy; it's about technological leverage, incentives, and where computation is heading next.
What You'll Learn:
Why quantum computing could eventually threaten Bitcoin's encryption
The real-world advantages of stablecoins and decentralized payments
How AI demand could create massive new energy constraints
Why optical (photonic) computing may outperform traditional silicon chips
How businesses might use large-scale AI compute for strategic decisions
Timestamped Chapters:
[00:02:00] Bitcoin, Encryption & Quantum Computing Risks
[00:03:02] A Note from James
[00:03:34] Crypto Markets: Speculation vs. Utility
[00:05:23] Banking Control, Debanking & Stablecoins
[00:07:40] Moore's Law, Huang's Law & The Limits of Silicon
[00:08:45] Optical Computing Explained
[00:09:12] NVIDIA, Parallelization & Power Consumption
[00:10:24] Energy Constraints & The Electrical Grid
[00:11:41] AI Energy Demand vs. Countries
[00:12:24] Corporate AI Decision-Making at Scale
[00:13:37] The Coming Explosion of AI Compute
[00:14:20] Energy Efficiency vs. Speed
[00:15:17] GPU Efficiency Improvements & Jevons Paradox
[00:17:00] Why AI Is Different from Traditional Computing
[00:17:47] Optical vs. Quantum vs. DNA Computing
[00:18:19] Why Optical Computing Fits AI Perfectly
[00:19:28] Precision, Bits & Neural Networks
[00:21:24] Error Tolerance in AI Systems
[00:22:00] Fiber Optics & Existing Infrastructure
[00:23:16] New Computing Paradigms Beyond Silicon
[00:24:00] Matrix Multiplication & AI Workloads
[00:24:53] Closing Thoughts
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
AI is not just getting smarter. It is getting faster by learning how to optimize the hardware it runs on. In this episode, Sharon Zhou, VP of AI at AMD and former Stanford AI researcher, explains how language models are beginning to write and optimize their own GPU kernel code. We explore what self-improving AI actually means, how reinforcement learning is used in post-training, and why kernel optimization could be one of the most overlooked scaling levers in modern AI. Sharon breaks down how GPU efficiency impacts the cost of training and inference, why catastrophic forgetting remains a challenge in continual learning, and how verifiable rewards from hardware profiling can help models improve themselves. The conversation also dives into compute economics, synthetic data, RLHF, and why infrastructure may define the next phase of AI progress. If you want to understand where AI scaling is really happening beyond bigger models and more data, this episode goes under the hood. Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI (00:00) Preview and Intro (00:25) Sharon Zhou's Background and Transition to AMD (02:00) What Is Self-Improving AI? (04:16) What Is a GPU Kernel and Why It Matters (07:01) Using AI Agents and Evolutionary Strategies to Write Kernels (11:31) Just-In-Time Optimization and Continual Learning (13:59) Self-Improving AI at the Infrastructure Layer (16:15) Synthetic Data and Models Generating Their Own Training Data (20:48) AMD's AI Strategy: Research Meets Product (23:22) Inside the NeurIPS Tutorial on AI-Generated Kernels (30:59) Reinforcement Learning Beyond RLHF (39:09) 10x Faster Kernels vs 10x More Compute (41:50) Will Efficiency Reduce Chip Demand? (42:18) Beyond Language Models: Diffusion, JEPA, and Robotics (45:34) Educating the Next Generation of AI Builders
Thu, 26 Feb 2026 21:45:00 GMT http://relay.fm/connected/592 http://relay.fm/connected/592 The Rickies (March 2026) 592 Federico Viticci, Stephen Hackett, and Myke Hurley Apple is hosting a mysterious media experience next week, and in anticipation of new products, Stephen, Myke, and Federico make predictions about what is coming. Apple is hosting a mysterious media experience next week, and in anticipation of new products, Stephen, Myke, and Federico make predictions about what is coming. clean 3983 Subtitle: Lil' ChippyApple is hosting a mysterious media experience next week, and in anticipation of new products, Stephen, Myke, and Federico make predictions about what is coming. This episode of Connected is sponsored by: Insta360: Introducing the Insta360 Wave and the Link 2 Pro. Sentry: Mobile crash reporting and app monitoring. New users get $100 in Sentry credits with code connected26. Squarespace: Save 10% off your first purchase of a website or domain using code CONNECTED. Links and Show Notes: Get Connected Pro: Preshow, postshow, no ads. Submit Feedback Apple in 2025: The Six Colors report card – Six Colors Six Colors' Apple in 2025 Report Card - MacStories My Full Responses for the 2025 Six Colors Report Card - 512 Pixels Upgrade #604: The Shifting Sands of Liquid Glass - Relay Samsung Galaxy S26/Ultra Impressions: 1 Crazy Display Feature! 
- MKBHD - YouTube Samsung Galaxy Unpacked 2026 in 12 minutes - The Verge - YouTube Introducing Perplexity Computer 2026 March Keynote Rickies – Rickies.net Keynote Rickies, March 2026 – Rickies.co Wood Blocks | Nintendo The MacBook Air's wedge is truly gone — and I miss it already | The Verge Leaker Says Apple's Lower-Cost MacBook Will Have These 8 Limitations - MacRumors M5 Pro chip could separate CPU and GPU in 'server grade' chips - 9to5Mac 2.5D integrated circuit - Wikipedia Apple Reportedly Agrees to 100% Price Hike on Samsung Memory Chips - MacRumors New ‘F1: Drive to Survive' season is coming to Apple TV - 9to5Mac Apple TV reveals new space-race thriller series is coming soon - 9to5Mac
BrainChip CEO Sean Hehir joins me to unpack where artificial intelligence is actually headed—and why the dominant “everything in the data center” narrative is incomplete. Most AI conversations fixate on massive models, GPU farms, and trillion-dollar infrastructure bets. This episode shifts the frame. Sean and I explore the structural reality that power consumption, latency, and grid constraints are forcing AI to decentralize—and what that means for founders, engineers, and the broader economy. Sean explains how neuromorphic computing and ultra-low-power silicon enable AI inference outside the data center—inside wearables, medical devices, drones, manufacturing systems, and even space applications. We examine why CPUs and GPUs aren't optimized for edge workloads, how custom silicon changes the economics, and why power efficiency isn't a side issue—it's the bottleneck that determines what scales. The conversation expands into workforce displacement, labor fluidity, productivity cycles, and whether technological acceleration inevitably creates unemployment crises—or simply reshuffles value creation again, as history repeatedly shows. This isn't a speculative futurism episode. 
It's a grounded look at model trends, infrastructure limits, and how companies survive inside a market moving at month-scale rather than decade-scale. The lesson isn't that AI replaces everything. It's that architecture determines outcomes.
TL;DR
* AI is centralizing in data centers—but it's also rapidly decentralizing to the edge
* Power constraints will shape the next phase of AI more than hype cycles
* Neuromorphic and event-driven silicon drastically reduce energy per compute
* Edge AI enables medical wearables, safety detection, space systems, and industrial automation
* Models are getting larger—but optimization techniques will shrink them into smaller form factors
* Productivity gains historically displace tasks—not human adaptability
* The future isn't about bigger servers—it's about smarter distribution
* Lowest power per compute is a strategic advantage, not a marketing line
Memorable Lines
* “Don't bet against humanity. We're very creative.”
* “The future of AI isn't just in data centers.”
* “Power isn't a feature—it's the constraint.”
* “If you're the lowest power solution, you will always have customers.”
* “Architecture decides what becomes possible.”
Guest
Sean Hehir — CEO of BrainChip
Technology executive leading the commercialization of neuromorphic AI processors focused on ultra-low-power edge inference. Oversees BrainChip's evolution from early engineering innovation to market-driven, customer-focused deployment.
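The "energy per compute" claim above can be made concrete with a toy sketch: an event-driven layer does no work for silent inputs, which is the core trick behind neuromorphic efficiency on sparse sensor data. Pure Python with made-up numbers, purely illustrative:

```python
def dense_macs(x, w):
    """Conventional dense layer: every input costs a multiply-accumulate,
    even when the input is zero."""
    out = [0.0] * len(w[0])
    macs = 0
    for i, xi in enumerate(x):
        for j in range(len(out)):
            out[j] += xi * w[i][j]
            macs += 1
    return out, macs

def event_driven_macs(x, w):
    """Event-driven layer: zero inputs generate no events, so they cost
    nothing -- the idea behind spiking/neuromorphic energy savings."""
    out = [0.0] * len(w[0])
    macs = 0
    for i, xi in enumerate(x):
        if xi == 0.0:
            continue  # no event, no work
        for j in range(len(out)):
            out[j] += xi * w[i][j]
            macs += 1
    return out, macs

# Sensor-like input: mostly silent, a few active channels.
x = [0.0] * 97 + [1.0, 0.5, 2.0]
w = [[0.01 * (i + j) for j in range(16)] for i in range(100)]

dense_out, dense_cost = dense_macs(x, w)
event_out, event_cost = event_driven_macs(x, w)
assert dense_out == event_out   # identical result...
print(dense_cost, event_cost)   # ...at a fraction of the ops -> 1600 48
```

With 97% of inputs silent, the event-driven path does about 3% of the multiply-accumulates for the same output; in hardware, skipped work is skipped energy.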
No Priors: Artificial Intelligence | Machine Learning | Technology | Startups
By the end of 2026, AI capital expenditure is projected to hit nearly $700 billion. The question isn't who has the best model, but who has the most creative financing to build out AI infrastructure and beyond. Sarah Guo is joined by Neil Tiwari, Managing Director at Magnetar Capital, a financial innovator helping the AI industry scale from billions to trillions of dollars in CapEx. Neil explains some of the debt structures used to finance massive GPU clusters, who is taking the risk, and how the industry is maturing. Sarah and Neil also discuss how power distribution, energy storage, and physical materials like steel are the bottlenecks of the AI industry. Plus, Neil gives his take on the future of inference-optimized clouds, and why the market shift away from software and into infrastructure might be an overreaction. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil Chapters: 00:00 – Cold Open 00:05 – Neil Tiwari Introduction 00:26 – Magnetar's Story 01:28 – Why CoreWeave Helped Magnetar Win 06:15 – Scaling CapEx Efficiently 09:02 – Debunking GPU Collateral Risk 11:42 – How Deal Structures Evolve 13:01 – What Bottlenecks Buildout 15:28 – Circular Financing Critiques 17:35 – The Shift from Training to Inference Workloads 23:10 – AI Factories 24:12 – Constraints of the Current Power Grid 28:27 – Sovereign Compute Buildouts 29:54 – Physical AI Capital Needs 32:48 – The Capital Rotation Away from SaaS 36:04 – Conclusion
DOD – Disrupter Disrupters
China markets reopening after Lunar New Year
Mexico Cartel Wars
Refunds requested for the illegal tariffs
PLUS we are now on Spotify and Amazon Music/Podcasts!
Click HERE for Show Notes and Links
DHUnplugged is now streaming live - with listener chat. Click on link on the right sidebar.
Love the Show? Then how about a Donation?
Follow John C. Dvorak on Twitter
Follow Andrew Horowitz on Twitter
Warm-Up
- The CTP for Caterpillar announced
- DOD - Disrupter Disrupters
- China markets reopening after Lunar New Year
- Mexico Cartel Wars (Jalisco)
Markets
- Mortgage Rates - looking good!
- Tariffs found illegal - that is not stopping anything
- Refunds requested for the illegal tariffs
- Monday's big drop and AI taking a bite out of stock prices
Tariffs
- First, who actually knows what is going on. 100% chaos
- Supreme Court ruled illegal (6-3)
- 10% flat across all countries immediately added
- Wait a day and make that 15%
- FedEx seeks refund for illegal IEEPA tariffs imposed by Trump after the Supreme Court ruled Trump's tariffs exceeded authority
- Numerous lawsuits expected for IEEPA tariff refunds
- Apple has spent more than $3 billion on tariffs since President Donald Trump enacted his trade policies. What about that? (HOW TO FIGURE OUT WHO GETS THE REFUND)
--- Estimate that $175B in tariffs has been collected already
- A group of 22 U.S. Senate Democrats on Monday introduced legislation that would require President Donald Trump's administration to fully refund within 180 days all of the revenue, with interest, collected from tariffs struck down by the U.S. Supreme Court.
- The legislation would require the Customs and Border Protection agency, which collects tariffs at U.S. ports of entry, to prioritize small businesses.
- The U.S. Customs and Border Protection agency said it will halt collections of tariffs imposed under the International Emergency Economic Powers Act at 12:01 a.m. 
EST (0501 GMT) on Tuesday
Stop The Presses
- After years of JCD's rants.......
- Apple will soon introduce MacBooks with touch screens
- Apple Inc.'s initial touch Macs will have the Dynamic Island at the center top of the display and OLED screen technology. The new MacBook Pro models will have a refreshed, dynamic user interface that can shift between being optimized for touch or point-and-click input.
Europe Reacts
- "The current situation is not conducive to delivering 'fair, balanced, and mutually beneficial' transatlantic trade and investment, as agreed to by both sides" in the joint statement setting out the terms of last year's trade agreement, the Commission said. "A deal is a deal."
- All active discussions are halted on any USA/Europe trade deal
The Potential Winners
- Brazil and China may be the winners here
- Chinese President Xi Jinping has a boost in bargaining power after the US Supreme Court invalidated Donald Trump's broad emergency tariffs, a key point of leverage over China.
- The removal of tariff threats will make it harder for Trump to press Xi for larger purchases of certain products and leaves him without a key weapon to strike back if Chinese negotiators make fresh demands.
- Xi's team will likely push harder for access to advanced semiconductors, the removal of trade restrictions on Chinese companies, and reduced US support for self-ruled Taiwan, according to Wu Xinbo, director at Fudan University's Center for American Studies.
NVDA Earnings
- NVIDIA drops its fiscal Q4 2026 (ended Jan 2026) results tomorrow—another make-or-break moment for the AI trade.
- The bar is sky-high after years of blowout beats, but whispers of "peak AI" and slowing growth momentum have investors on edge.
--- Consensus Expectations:
---- Revenue: ~$65.6–$66.1 billion (up ~67–68% YoY from last year's ~$39B; guided $65B ±2% in prior report)
------ EPS (adjusted/non-GAAP): ~$1.50–$1.53 (up ~70–72% YoY from $0.89). 
-------- Gross margins: Targeting ~75% non-GAAP (holding strong despite supply chain noise).
----------- Key driver: Data Center segment expected to crush ~$58–$60B, fueled by Blackwell ramp and hyperscaler spend.
Home Depot Earnings
- The home-improvement retailer gained 2.7% after posting fourth-quarter adjusted earnings of $2.72 per share on revenues of $38.20 billion.
- That exceeded the per-share earnings of $2.54 on revenues of $38.12 billion expected by analysts polled by LSEG.
AMD News
- The semiconductor maker rose about 11% after it inked a multiyear deal with Meta to supply up to 6 gigawatts of its graphics processing units to artificial intelligence data centers.
- The cost of the deal is unclear, but the companies' agreement includes a performance-based warrant that could amount to up to 160 million AMD shares, according to a statement dated Tuesday.
- Meta has committed to deploying up to 6 gigawatts (GW) of AMD's Instinct GPUs (high-end graphics processing units optimized for AI workloads) to power its massive AI data centers.
- Analysts estimate the GPU portion alone could be worth $60–$100+ billion over 5+ years
Mortgage Rates
- The average rate on the popular 30-year fixed mortgage fell to 5.99% on Monday, according to Mortgage News Daily, matching its lowest levels since 2022.
- Last year at this time the rate was 6.89%.
- A buyer putting 20% down on the median priced home, about $400,000 according to the National Association of Realtors, would have a monthly payment of $1,916 for the principal and interest. One year ago, that payment would have been $2,105, a difference of $189.
Life Insurance Record
- Manulife Financial Corp. sold a $300 million life insurance policy in Singapore, topping what Guinness World Records certified as the most valuable policy ever issued.
- The policy surpasses the previous record of $250 million, set by HSBC Life in Hong Kong in 2024. 
Manulife said in a statement Tuesday that the deal reflects growing demand from ultra-wealthy clients to preserve their assets.
- In Singapore over the past 12 months, Manulife has issued 25 individual policies each worth more than $50 million.
Bitcoin Rout
- Gemini said it was axing as much as a quarter of its staff and exiting the UK, European Union and Australia entirely.
- This week, it parted with its chief operating officer, chief financial officer and chief legal officer, all in a single day.
- Its stock has fallen more than 80% from a post-listing high last year, collapsing its market value from a peak of almost $4 billion to under $700 million.
Over Greenland
- USA sending a "hospital ship" over
- Trump's post on the ship came hours after Denmark's Joint Arctic Command said it had evacuated a crew member who required urgent medical treatment from a U.S. submarine in Greenlandic waters, seven nautical miles outside of Greenland's capital, Nuuk.
- Greenland said thanks but no thanks
So Long!
- U.S. investors are pulling money out of their own stock market at the fastest pace in at least 16 years as Big Tech returns fade and better-performing overseas markets look more attractive.
- In the last six months, U.S.-domiciled investors have pulled some $75 billion from U.S. equity products, with $52 billion flowing out since the start of 2026 alone, the most in the first eight weeks of the year since at least 2010
AI Disruption - DOD (Disruption of Disrupters)
- CrowdStrike -9.8% and other cybersecurity names under heavy pressure again as AI disruption fears build following Anthropic's Claude Code release
- Cybersecurity stocks are under broad pressure today, extending recent weakness following Friday's launch of Claude Code Security by Anthropic. 
Claude Code Security scans codebases for vulnerabilities and suggests software patches for human review, fueling a narrative that AI platforms may be moving more quickly into parts of the security workflow than investors had previously expected. For cybersecurity, that raises concern around the forward demand outlook and competitive positioning, particularly in areas tied to application security, cloud security, identity workflows, and security operations automation, where AI-native tools could start to narrow perceived differentiation.
- The move suggests investors are still sorting through the implications for product overlap, pricing power, and competitive positioning as AI capabilities evolve quickly.
- IBM shares dropping toward lows of the session; attributed to news that Claude can automate COBOL modernization.
COBOL (Common Business-Oriented Language) is a high-level, English-like programming language created in 1959 for business, finance, and administrative data processing. It is renowned for its verbosity, readability, and reliability, processing massive amounts of transactions on mainframe systems, note NetCom Learning and IBM. Despite being decades old, it remains critical in banking, insurance, and government sectors. 
- It is estimated that 70-80% of the world's business transactions are processed by COBOL
Grok's Prediction about the Future of OpenAI/ChatGPT
Scenario | Likelihood (My Estimate) | Key Factors | Outcome for OpenAI/ChatGPT
Thriving Leader | Medium (40%) | Sustained breakthroughs, partnerships (e.g., Microsoft), regulatory wins | OpenAI as AI giant; ChatGPT as ecosystem hub for agents/robots
Evolved Survivor | High (50%) | Adaptation to agents/hardware; mergers | Exists but rebranded; ChatGPT integrated into daily life tools
Decline/Acquisition | Low (10%) | Overcompetition, funding collapse | Absorbed or legacy; ChatGPT commoditized or obsolete
Quick check on Europe Shares
- European company earnings growth is picking up this reporting season against a tentatively improving economic backdrop, but wary investors are demanding more than solid results to justify sky-high valuations.
- Companies representing 57% of Europe's market capitalization have reported so far, achieving average earnings growth of 3.9% in the fourth quarter, ahead of estimates for a final result of a contraction of 1.1%
--- That is a big differential.... +3.9 vs -1.1
Iran Talks
- News over the weekend that Iran will look to discuss a variety of items and potentially get a deal.... energy, mining and aircraft
- Best guess: Iran will string us along like Russia is doing and we will say we have some kind of bogus deal.
--- There is some talk of US "going in" as we are building military presence. Supposedly there are some saying it could be a multi-week incursion.
- What is the plan - Regime change?
What is this?
- A divided Supreme Court on Tuesday ruled that Americans can't sue the U.S. Postal Service, even when employees deliberately refuse to deliver mail.
- By a 5-4 vote, the justices ruled against a Texas landlord, Lebene Konan, who alleges her mail was intentionally withheld for two years. Konan, who is Black, claims racial prejudice played a role in postal employees' actions. 
- Justice Clarence Thomas, writing for a majority of five conservative justices, said the federal law that generally shields the Postal Service from lawsuits over missing, lost and undelivered mail includes “the intentional nondelivery of mail.”
- So can ballots just be thrown in the garbage for mail-ins for one party that will throw out another party's?
Love the Show? Then how about a Donation?
THE CLOSEST TO THE PIN for CATERPILLAR
Winners will be getting great stuff like the new "OFFICIAL" DHUnplugged Shirt!
FED AND CRYPTO LIMERICKS
See this week's stock picks HERE
Follow John C. Dvorak on Twitter
Follow Andrew Horowitz on Twitter
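The mortgage figures in the Markets segment above can be reproduced with the standard amortization formula. A quick sketch with the inputs taken from the show notes (5.99% vs. 6.89%, 20% down on a $400,000 home):

```python
def monthly_payment(principal, annual_rate, years=30):
    """Standard amortization formula:
    M = P * r * (1+r)^n / ((1+r)^n - 1), with monthly rate r and n months."""
    r = annual_rate / 12
    n = years * 12
    growth = (1 + r) ** n
    return principal * r * growth / (growth - 1)

price = 400_000         # NAR median price cited above
loan = price * 0.80     # after a 20% down payment

now = monthly_payment(loan, 0.0599)   # this week's 30-year rate
then = monthly_payment(loan, 0.0689)  # the rate one year ago

print(f"${now:,.0f} vs ${then:,.0f}, saving ${then - now:,.0f}/mo")
# lands within a dollar of the ~$1,916, ~$2,105, and $189 figures quoted above
```

The small residual difference from the quoted numbers is just rounding; the formula itself is the one lenders use for a fixed-rate loan's principal-and-interest payment (taxes and insurance excluded).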
In the world of software, you fix a bug with a simple patch. But when you develop hardware, every bug found before manufacturing is free, while one discovered only on the finished chip costs millions of dollars and months of wasted time. What is it like to work in an industry where there is practically no room for error? In the third episode of the Pojačalo Next Silicon special, Ivan talks with Vladimir Milošević, leader of the hardware verification team at the company. The conversation reveals the fascinating and complex world of chip development - from the initial idea and architecture, through rigorous pre-production testing, all the way to the final physical realization. Vladimir explains why verification is a crucial step in an industry where every mistake is extremely expensive, and demystifies the fact that Serbia, with its centers in Belgrade, Novi Sad, and Niš, has become a serious global "powerhouse" for developing cutting-edge hardware. The story focuses on the revolutionary technology Next Silicon is developing, especially its "Maverick 2" chip, which is changing the rules of the game in supercomputing and high-performance computing (HPC). You will learn what the engineering adventure of creating hardware that dynamically adapts to software looks like, solving the energy-efficiency and speed problems that traditional processors (CPUs and GPUs) cannot overcome. Support us on BuyMeACoffee: https://bit.ly/3uSBmoa Read the transcript of this episode: https://bit.ly/4cNdB9T Visit our site and subscribe to our mailing list: http://bit.ly/2LUKSBG Subscribe to our YouTube channel: http://bit.ly/2Rgnu7o Follow Pojačalo on social media: FB: https://www.facebook.com/PojacaloRS/ IG: https://www.instagram.com/pojacalo.rs/ X: https://x.com/PojacaloRS LN: https://www.linkedin.com/company/pojacalo TikTok: https://www.tiktok.com/@pojacalo.rs
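The pre-silicon verification workflow described above boils down to driving stimulus into a design under test (DUT) and comparing it against a golden reference model, with the corner cases where bugs hide checked explicitly. A minimal illustrative sketch in Python (the saturating adder and all names are invented for the example, not Next Silicon's actual flow):

```python
import random

WIDTH = 8
MAX = (1 << WIDTH) - 1  # 255 for an 8-bit datapath

def golden_sat_add(a, b):
    """Golden reference model: 8-bit saturating add."""
    return min(a + b, MAX)

def dut_sat_add(a, b):
    """'Design under test': stands in for the simulated RTL."""
    s = (a + b) & MAX              # a hardware adder wraps around...
    overflow = (a + b) > MAX
    return MAX if overflow else s  # ...saturation logic catches it

def run_testbench(n_vectors=10_000, seed=0):
    """Constrained-random verification: random stimulus plus the
    corner cases (zero, full-scale, overflow boundary)."""
    rng = random.Random(seed)
    corners = [(0, 0), (MAX, MAX), (MAX, 1), (0, MAX), (1, MAX)]
    vectors = corners + [(rng.randint(0, MAX), rng.randint(0, MAX))
                         for _ in range(n_vectors)]
    for a, b in vectors:
        assert dut_sat_add(a, b) == golden_sat_add(a, b), \
            f"mismatch at a={a}, b={b}"
    return len(vectors)

print(f"{run_testbench()} vectors passed")
```

A mismatch caught here costs a few seconds of simulation; the same bug caught after tapeout is the multi-million-dollar respin the episode talks about.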
Editor's note: CuspAI raised a $100m Series A in September and is rumored to have reached a unicorn valuation. They have all-star advisors, from Geoff Hinton to Yann LeCun, and a team of deep domain experts to tackle this next frontier in AI applications.
In this episode, Max Welling traces the thread connecting quantum gravity, equivariant neural networks, diffusion models, and climate-focused materials discovery (yes, there is one!!!).
We begin with a provocative framing: experiments as computation. Welling describes the idea of a “physics processing unit”—a world in which digital models and physical experiments work together, with nature itself acting as a kind of processor. It's a grounded but ambitious vision of AI for science: not replacing chemists, but accelerating them.
Along the way, we discuss:
* Why symmetry and equivariance matter in deep learning
* The tradeoff between scale and inductive bias
* The deep mathematical links between diffusion models and stochastic thermodynamics
* Why materials—not software—may be the real bottleneck for AI and the energy transition
* What it actually takes to build an AI-driven materials platform
Max reflects on moving from curiosity-driven theoretical physics (including work with Gerard 't Hooft) toward impact-driven research in climate and energy. 
The result is a conversation about convergence: physics and machine learning, digital models and laboratory experiments, long-term ambition and incremental progress.
Full Video Episode
Timestamps
* 00:00:00 – The Physics Processing Unit (PPU): Nature as the Ultimate Computer
* Max introduces the idea of a Physics Processing Unit — using real-world experiments as computation.
* 00:00:44 – From Quantum Gravity to AI for Materials
* Brandon frames Max's career arc: VAE pioneer → equivariant GNNs → materials startup founder.
* 00:01:34 – Curiosity vs Impact: How His Motivation Evolved
* Max explains the shift from pure theoretical curiosity to climate-driven impact.
* 00:02:43 – Why CuspAI Exists: Technology as Climate Strategy
* Politics struggles; technology scales. Why materials innovation became the focus.
* 00:03:39 – The Thread: Physics → Symmetry → Machine Learning
* How gauge symmetry, group theory, and relativity informed equivariant neural networks.
* 00:06:52 – AI for Science Is Exploding (Not Emerging)
* The funding surge and why AI-for-Science feels like a new industrial era.
* 00:07:53 – Why Now? 
The Two Catalysts Behind AI for Science
* Protein folding, ML force fields, and the tipping point moment.
* 00:10:12 – How Engineers Can Enter AI for Science
* Practical pathways: curriculum, workshops, cross-disciplinary training.
* 00:11:28 – Why Materials Matter More Than Software
* The argument that everything—LLMs included—rests on materials innovation.
* 00:13:02 – Materials as a Search Engine
* The vision: automated exploration of chemical space like querying Google.
* 00:14:48 – Inside CuspAI: The Platform Architecture
* Generative models + multi-scale digital twin + experiment loop.
* 00:21:17 – Automating Chemistry: Human-in-the-Loop First
* Start manual → modular tools → agents → increasing autonomy.
* 00:25:04 – Moonshots vs Incremental Wins
* Balancing lighthouse materials with paid partnerships.
* 00:26:22 – Why Breakthroughs Will Still Require Humans
* Automation is vertical-specific and iterative.
* 00:29:01 – What Is Equivariance (In Plain English)?
* Symmetry in neural networks explained with the bottle example.
* 00:30:01 – Why Not Just Use Data Augmentation?
* The optimization trade-off between inductive bias and data scale.
* 00:31:55 – Generative AI Meets Stochastic Thermodynamics
* His upcoming book and the unification of diffusion models and physics.
* 00:33:44 – When the Book Drops (ICLR?)
Transcript
Max: I want to think of it as what I would call a physics processing unit, like a PPU, right? Which is, you have digital processing units and then you have physics processing units. So it's basically nature doing computations for you. It's the fastest computer known, possibly even. It's a bit hard to program because you have to do all these experiments. Those are quite bulky; it's like a very large thing you have to do. But in a way it is a computation and that's the way I want to see it. You can do computations in a data center and then you can ask nature to do some computations. Your interface with nature is a bit more complicated. 
But then these things will have to seamlessly work together to get to a new material that you're interested in.
[01:00:44:14 - 01:01:34:08]
Brandon: Yeah, it's a pleasure to have Max Welling as a guest today. Max has done so much over his career that I've been so excited about. If you're in the deep learning community, you probably know Max for his work on variational autoencoders, which have literally stood the test of time. If you're a scientist, you probably know him for his pioneering work on graph neural networks and equivariance. And if you're in materials science, you probably know him for his new startup, CuspAI. Max has a long history doing lots of cool problems. You started in quantum gravity, which is, I think, very different from all of these other things you worked on. The first question for AI engineers and for scientists: what is the thread in how you think about problems? What is the thread in the type of things which excite you? And how do you decide what is the next big thing you want to work on?
[01:01:34:08 - 01:02:41:13]
Max: So it has actually evolved a lot. In my young days, let's say, I would just follow what I would find super interesting. I have kind of this sensor. I think many people have it, but maybe don't really use it very much, which is, you get this feeling of being very excited about some problem. Like, it could be: what's inside of a black hole, or what's at the boundary of the universe, or what is quantum mechanics actually all about. And so I followed that basically throughout my career. But I have to say that as you get older, this changes a little bit, in the sense that there's a new dimension coming to it, and that's impact. Going into two-dimensional quantum gravity, you're pretty much guaranteed there's going to be no impact from what you do, relatively speaking; maybe a few papers, but not in this world, at this energy scale. 
As I get closer to retirement, which is fortunately still 10 years away or so, I do want to kind of make a positive impact in the world. And I got pretty worried about climate change.
[01:02:43:15 - 01:03:19:11]
Max: I think politics seems to have a hard time solving it, especially these days. And so I thought: better work on it from the technology side. And that's why we started CuspAI. But there are also a lot of really interesting science problems in materials science. And so it's kind of combining both the impact you can make with it as well as the interesting science. So it's sort of these two dimensions: working on things where you feel there's something very deep going on here, and on the other hand, trying to build tools that can actually make a real impact in the world.
[01:03:19:11 - 01:03:39:23]
RJ: So the thread, when I look back at the different things that you worked on, some of them seem pretty connected, like the physics to equivariance and, uh, graph neural networks, maybe. And that seems to be somewhat related to CuspAI. Do you have a thread through there?
[01:03:39:23 - 01:06:52:16]
Max: Yeah. So physics is the thread. Having, you know, spent a lot of time in theoretical physics, I think there are, first, very fundamental and exciting questions, like things that haven't actually been figured out in quantum gravity. So that is really the frontier. There are also a lot of mathematical tools that you can use, right? For instance, in particle physics, but also in general relativity, symmetries play an enormously important role. And this goes all the way to gauge symmetries as well. And so applying these kinds of symmetries to, uh, machine learning was actually, you know, I thought of it as a very deep and interesting mathematical problem. 
I did this with Taco Cohen, and Taco was the main driver behind this; it went all the way from simple rotational symmetries to gauge symmetries on spheres and things like that. And Maurice Weiler, who's also here, was a very good PhD student with me; you know, he wrote an entire book, which I can really recommend, about the role of symmetries in AI and machine learning. So I find this a very deep and interesting problem. More recently I've taken a somewhat different path, which is the relationship between diffusion models and a field called stochastic thermodynamics. This is basically thermodynamics, which is a theory of equilibrium, but then formulated for out-of-equilibrium systems. And it turns out that the mathematics that we use for diffusion models, but even for reinforcement learning, for Schrodinger bridges, for MCMC sampling, is the same mathematics as this physical theory of non-equilibrium systems. And that got me very excited. Actually, when I taught a course in Muizenberg in South Africa, close to Cape Town, at the African Institute for Mathematical Sciences (AIMS), I turned that into a book. Two years later, the book was finished. I've sent it to the publisher. And this is about the deep relationship between free energy, diffusion models, basically generative AI, and stochastic thermodynamics. So it's always some kind of, I don't know, I find physics very deep. I also think a lot about quantum mechanics, and it's a completely weird theory that actually nobody really understands. And there's a very interesting story, which is maybe good to tell to connect my PhD back to where I am now. So I did my PhD with a Nobel Laureate, Gerard 't Hooft. He is the most brilliant man I've ever met. He was never wrong about anything as long as I've seen him. 
And now he says quantum mechanics is wrong and he has a new theory of quantum mechanics. Nobody understands what he's saying, even though what he's writing down is not mathematically very complex, but he's trying to address this understandability, let's say, of quantum mechanics head on. And I find it very courageous and I'm completely fascinated by it. So I'm also trying to think about, okay, can I actually understand quantum mechanics in a more mundane way? So that, you know, it works without all the weird multiverses and collapses and stuff like that. So physics has always been the thread, and I'm trying to apply the physics to the machine learning to build better algorithms.
[01:06:52:16 - 01:07:05:15]
Brandon: You are still very involved in understanding physics and the world. Yeah. And just applications to machine learning, or introducing new formalisms. That's really cool.
[01:07:05:15 - 01:07:18:02]
Max: Yes, I would say I'm not contributing much to physics, but I'm contributing to the interface between physics and machine learning. And that's called AI for science, or science for AI. It's actually a new discipline that's emerging.
[01:07:18:02 - 01:07:18:19]
Speaker 5: Yeah.
[01:07:18:19 - 01:07:45:14]
Max: And it's not just emerging, it's exploding, I would say. That's the better term, because investments have gone from the hundreds of millions to now the billions. There's now actually a startup by Jeff Bezos that raised a $6.2 billion seed round. Right. Insane. I guess it's the largest startup round ever, I think. And that's in this field, AI for science. It tells you something, that we are creating a new bubble here.
[01:07:46:15 - 01:07:53:28]
Brandon: So why do you think it is? What has changed that has motivated people to start working on AI for science type problems?
[01:07:53:28 - 01:08:49:17]
Max: So there's two reasons actually. 
One is that people have been applying the new tools from AI to the sciences, which is quite natural. And there are, I think, two big examples: protein folding is a big one, and the other one is machine learning force fields, also called machine-learned interatomic potentials. Both of them have been very successful, and both also had something to do with symmetries, which is kind of cool. People in AI saw an opportunity to apply the tools that they had developed beyond ad placement or multimedia applications, to something that could actually make a very positive impact on society: health, drug development, materials for the energy transition, carbon capture. These are all really cool, impactful applications.[01:08:50:19 - 01:09:42:14]Max: Besides that, the science itself is also very interesting. The fact that these two fields are coming together, and that we're now at the point where we can actually model these things effectively and move the needle on some of these scientific methodologies, is also a very unique moment, I would say. People recognize that, okay, now we're at the cusp of something new, which is actually what the company is named after. And of course that always creates a lot of energy. It's like a virgin field, a green field: nobody's been there, I can rush in and start harvesting, right? And I think that's also what's causing a lot of the enthusiasm in the field.[01:09:42:14 - 01:10:12:18]RJ: If you're an AI engineer, basically the people that listen to this podcast who are in the field, then you maybe don't have a strong science background, but are excited.
Most, I would say most AI practitioners, ML engineers or scientists, would consider themselves scientists, and they have some background: a little bit of physics from college, maybe even graduate school, whether they've been working for a while or are just starting out. How does somebody who is not a scientist on a day-to-day basis get involved?[01:10:12:18 - 01:10:14:28]Max: Well, they can read my book once it's out.[01:10:16:07 - 01:11:05:24]Max: Beyond that, we should create curricula that are on this interface. Some universities already have actual courses you can take, and maybe online courses. These workshops where we are now are actually very good as well, and we should probably have more tutorials before the workshops start. I've actually proposed this at some point: maybe first have an hour of tutorial so that people new to the field can get in. There's a lot out there. Much of it is of course inaccessible, but I would say we will create many more books and other content that is more accessible, including this podcast, I would say. So I think it will come. And these days you can watch videos and things; there's a huge amount of content you can go and see.[01:11:05:24 - 01:11:28:28]Brandon: So maybe a follow-up to that. That was how people learn and get involved, but why should they get involved? A lot of people in our audience will be interested in AI engineering, but they may be looking for bigger impacts in the world. What opportunities does AI for science provide them to make an impact and change the world that working in the world of pure bits would not?[01:11:28:28 - 01:11:40:06]Max: So my view is that underlying almost everything is a material. We are focusing a lot on LLMs now, which is kind of the software layer.[01:11:41:06 - 01:11:56:05]Max: I would say, if you think very hard, underlying everything is a material.
So underlying an LLM is a GPU, and underlying a GPU is a wafer on which we have to deposit materials. Do we want to wait a little bit?[01:12:02:25 - 01:12:11:06]Max: Underlying everything is a material. So I was saying, there's the LLM, and underlying the LLM is a GPU on which it runs. In order to make that GPU,[01:12:12:08 - 01:12:43:20]Max: you have to put materials down on a wafer and shine EUV light on it in order to etch the structures in. But that's now an actual materials problem, because more or less we've reached the limits of scaling things down, and now we are trying to improve further with new materials. So that's a fundamental materials problem. We need to get through the energy transition fast if we don't want to mess up this world. So there are, for instance, batteries; that's a complete materials problem. There are fuel cells.[01:12:44:23 - 01:13:01:16]Max: There are solar panels. They can now make solar panels with new perovskite layers on top of the silicon layers that can theoretically capture up to 50% of the light, where now we're at, I don't know, maybe 22% or something. So these are huge changes, all by materials innovation.[01:13:02:21 - 01:13:47:15]Max: And wherever you go, I can probably dig deep enough and then tell you, well, actually, the very foundation of what you're doing is a materials problem. So I think it's just very nice to work on this very foundation. And also because, and I think this is maybe also something that's happening now, we can start to search through this materials space. This has never been the case, right? The normal way of working for scientists is you read papers, you come up with a new hypothesis, you do an experiment and you learn, et cetera. That's a very slow process. Now we can treat this as a search engine.
Like we search the internet, we now search the space of all possible molecules, not just the ones that people have made or that are in the universe, but all of them.[01:13:48:21 - 01:14:42:01]Max: And we can make this fully automated. That's the hope, right? It becomes a tool where you type what you want, something starts spinning, some experiments get going, and then out comes a list of materials. You look at it and say, maybe not, and then you refine your query a little bit. And you do research with this search engine, where a huge amount of computation and experimentation is happening somewhere far away in some lab or some data center or something like this. I find this a very, very promising view of how we can build a much better materials layer underneath almost everything. And also more sustainable materials. Our plastics are polluting the planet. What if you come up with a plastic that destroys itself after, I don't know, a few weeks, and actually becomes a fertilizer? These things are not impossible at all. They can be done, and we should do it.[01:14:42:01 - 01:14:47:23]RJ: Can you tell us a little bit just generally about CuspAI, and then I have a ton of questions.[01:14:47:23 - 01:14:48:15]Speaker 5: Yeah.[01:14:48:15 - 01:17:49:10]Max: So CuspAI started about 20 months ago, and it was because I was worried, and I'm still worried, about climate change. I realized that in order to stay within two degrees, let's say, we would not only have to reduce our emissions to zero by 2050, but then spend another half century or even a century removing carbon dioxide from the atmosphere, not by reducing our emissions, but actually removing it at a rate that's about half the rate at which we now emit it. And that is an unsolved problem. But if we don't solve it, two degrees is not going to happen, right?
It's going to be much more. And I don't think people quite understand how bad that can be; four degrees would be very bad. So this technology needs to be developed. That was my and my co-founder Chad Edwards's motivation to start this startup. And also because we saw the technology was ready, which is also very good; the time was right to do it. In the meanwhile, we've grown to about 40 people and we've raised about 130 million in investment into the company, which for a European company is quite a lot. It's interesting that right after that, other startups got even more; that tells you how fast this is growing. We've built the platform, of course, but it's for a series of material classes, and it needs to be constantly expanded to new material classes. And it can be made more automated, because by putting LLMs in, the whole thing gets more and more automated. Now we're moving to high-throughput experimentation: connecting the actual platform, which is computational, to the experiments, so that you can also get fast feedback from experiments. And I don't want to think of experiments as something you do at the end, although that's what we've been doing so far. I want to think of them as what I would call a physics processing unit, a PPU: you have digital processing units, and then you have physics processing units. It's basically nature doing computations for you. It's the fastest computer known, possibly. It's a bit hard to program, because you have to do all these experiments, and those are quite bulky; it's a very large thing you have to do. But in a way, it is a computation, and that's the way I want to see it. So you can do computations in a data center, and then you can ask nature to do some computations.
Your interface with nature is a bit more complicated. But then these things will have to seamlessly work together to get to a new material that you're interested in. That's the vision we have. We don't say superintelligence, because I don't quite know what it means and I don't want to oversell it. But I do want to automate this process and put a very powerful tool in the hands of the chemists and the materials scientists.[01:17:49:10 - 01:18:01:02]Brandon: That actually brings up a question I wanted to ask you. Can you talk about your platform, to whatever degree you can: explain how it works and what your thought process was in developing it?[01:18:01:02 - 01:20:47:22]Max: Yeah, it's been surprisingly... it's not rocket science, I would say, in the sense of the design. The design that I wrote down at the very beginning is still more or less the design, although you add things. For instance, I wasn't thinking very much about multi-scale models, and as the company iterated it turned out that multi-scale is very important. In the beginning I wasn't thinking very much about self-driving labs, but now I think we're at the stage where we should be adding that. So there are bits and details that we're adding, but more or less it's what you see in the slide decks here as well: there is a generative component that you have to train to generate candidates, and then there is a digital twin, a multi-scale, multi-fidelity digital twin, where you walk through the steps of the ladder. You do the cheap things first and weed out everything that's obviously not useful, then you go to more and more expensive things later. So you narrow things down to a small number; those go into an experiment, you do the experiment, get feedback, et cetera. Now, things that have also been added more recently are more agentic parts.
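The multi-fidelity ladder described here (generate candidates, score them with cheap models first, keep the survivors for costlier models, send only the finalists to experiment) can be sketched roughly as follows. This is a minimal illustration, not CuspAI's actual platform: the three stage functions and their noise levels are invented stand-ins for real models of increasing fidelity and cost.

```python
import random

random.seed(0)

# Hypothetical stand-ins for real scoring models. Each stage estimates the
# same underlying quality, with later stages less noisy but more expensive.
def cheap_heuristic(c):
    return c["quality"] + random.gauss(0, 0.3)    # fast, noisy screen

def ml_potential(c):
    return c["quality"] + random.gauss(0, 0.1)    # slower, more accurate

def dft_like(c):
    return c["quality"] + random.gauss(0, 0.02)   # slowest, most accurate

def screen(candidates, stages, keep_fractions):
    """Walk the fidelity ladder: score the pool with each stage in turn
    and keep only the top fraction for the next, costlier stage."""
    pool = list(candidates)
    for stage, frac in zip(stages, keep_fractions):
        pool.sort(key=stage, reverse=True)
        pool = pool[: max(1, int(len(pool) * frac))]
    return pool

# 1000 generated candidates -> 100 -> 20 -> 5 survivors for the lab.
candidates = [{"id": i, "quality": random.random()} for i in range(1000)]
shortlist = screen(candidates,
                   [cheap_heuristic, ml_potential, dft_like],
                   [0.10, 0.20, 0.25])
print(len(shortlist))  # 5
```

The point of the ordering is cost: the expensive stage only ever sees the small pool that survived the cheap ones.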
We have agents that search the chemical literature and come up with chemical suggestions for doing experiments. We have agents which autonomously orchestrate all of the computations and the experiments that need to be done. They're in various stages of maturity and they can be continuously improved, I would say. So I don't think that part is rocket science; the design of that thing is not surprising. What is surprisingly hard is to actually build it. That's where the moat is: in the data that you can get your hands on, and in actually building the platform. And I would say there's two people in particular I want to call out: Felix Hunker, who is building the scientific part of the platform, and Sandra de Maria, who is building the stack, the MLOps part of the platform. And recently we also added Aron Walsh to our team, who is a very accomplished scientist from Imperial College. We're very happy about that; he's going to be chief science officer. We also have a partnerships team that seeks out all the customers, because this is one thing I find very important: in practice, it's so complex to actually bring a material to the real world that you must do this in collaboration with the domain experts, which are typically the companies. So we only start to invest in a direction if we find a good industrial partner to go on that journey with us.[01:20:47:22 - 01:20:55:12]Brandon: Makes a lot of sense.
Over the evolution of the platform, how did you find that human intervention changed?[01:20:56:18 - 01:21:17:01]Brandon: I guess you could imagine two directions when you start: making everything purely automated and agentic, and then later finding that you need more human input and feedback at different steps. Or did you start out with human feedback at lots of steps, and then figure out ways to remove it?[01:21:17:01 - 01:22:39:18]Max: It's the second one. So you build tools, and it's much more modular than you'd think. It's like: we need these tools for this application, we need those tools for that one. So you build all these tools, and then in the beginning you go through a workflow manually: first this tool, then that tool, then the next. You put them in a workflow and then you figure out, oh, actually, this porous material that we're trying to make collapses if you shake it a bit. Okay, then you add a new tool that tests for stability. Right. And so there are more and more tools. And then you build the agent, which could be a Bayesian optimizer, or it could be an actual LLM, maybe trained to be a good chemist, that will then start to use all these tools in the right way and in the right order. But in the beginning, it's you as a chemist putting the workflow together, and then you think about, okay, how am I going to automate this? One very easy question you can ask yourself is: every time somebody who is not a super expert in DFT wants to do a calculation, they have to go to somebody who knows DFT.
And so could you start to automate that away? Which means making it so user friendly that you actually do the right DFT for the right problem and for the right length of time, and you can actually assess whether it's a good outcome, et cetera. So you start to automate smaller and bigger pieces, and in the end the whole thing is automated.[01:22:39:18 - 01:22:53:25]Brandon: So your philosophy is that you want to provide a set of specific tools that make the scientists making decisions better informed, rather than trying to create a fully automated process.[01:22:53:25 - 01:23:22:01]Max: I think it's sort of the same as what you're saying, because yes, we want to automate, but we don't see something very soon where the chemist, the domain expert, is out of the loop. But it's a retreat, right? First, you need an expert to tell you precisely how to set the parameters of the DFT calculation. Okay, maybe we can take that out, maybe we can automate that. And so increasingly more of these things are going to be removed.[01:23:22:01 - 01:23:22:19]Speaker 5: Yeah.[01:23:22:19 - 01:24:33:25]Max: In the end, the vision is that it will be a search engine where a chemist will type things and get candidates, but the chemist will still decide what is a good material and what is not a good material out of that list. The vision of a completely dark lab, where you close the door and just say, find something interesting, and it figures out what's interesting on its own and comes back with, oh, I found this new material that does such and such: that's not the vision I have, at least not for a long time. For me, it's really about empowering the domain experts that are sitting in the companies and in universities to be much faster in developing their materials.
And I should say, it's also good to be a little humble at times, because it is very complicated to make a material and bring it into the real world. There are people who have been doing this their entire lives. I wonder if they scratch their heads and say, well, how are you going to completely automate that away in the next five years? I don't think that's going to happen at all.[01:24:35:01 - 01:24:39:24]Max: So to me, it's an increasingly powerful tool in the hands of the chemists.[01:24:39:24 - 01:25:04:02]RJ: I have a question. You've talked before about getting people interested based on having a big breakthrough in materials versus incremental change. I'm curious what you think about the platform you have now and what you're stepping towards. Are you chasing the big change, or is this incremental? They're not mutually exclusive, obviously, but what do you think about that?[01:25:04:02 - 01:26:04:27]Max: We follow a mixed strategy. So we are definitely going after a big material. Again, we do this with a partner. I'm not going to disclose precisely what it is, but we have our own long-term goal. You could call it a lighthouse, or a moonshot, or whatever, but it is going to be a really impactful material that we want to develop as a proof point that it can be done, that it will make it into the real world, and that AI was essential in actually making it happen. At the same time, we're also quite happy to work with companies that have more modest goals. I would say one kind is a very deep partnership, where you go on a journey with a company and that's a long-term commitment together. And the other is somebody who says, I need a force field; can you help me train this force field and then maybe analyze this particular problem for me? And I'll pay you a bunch of money for that, and then maybe after that we'll see.
And that's fine too. But we prefer the deep partnerships, where we can really change something for the good.[01:26:04:27 - 01:26:22:02]RJ: Yeah. And do you feel like, from a platform standpoint, you're ready for that? Again, not asking you to disclose proprietary secret sauce, but generally speaking, what needs to happen from where we are now to get those big breakthroughs?[01:26:22:02 - 01:28:40:01]Max: What I find interesting about this field is that every time you build something, it's immediately useful. Unlike quantum computing or nuclear fusion, where you work for 20, 30, 40 years and nothing, nothing, nothing, and then when it happens, it's huge. It's quite different here, because every time, you go to a customer and ask: what do you need? So we work, let's say, on a problem like water filtration. We want to remove PFAS from water, and we do this with a company, Kemira. They are a deep partner for us, so we're on a journey together. I think the breakthrough will happen with a lot of human in the loop, because the chemists have a whole lot more knowledge of their field, and it's us who will help them with these new methods. In that interface, those interactions, something beautiful will happen, and that will have to happen first before this field will really take off, I think, in the sense that it's not a bubble; let's put it that way. So that people see that what's happening is actually real. In the beginning it will be with a lot of humans in the loop, I would say, and I would hope we will have this new breakthrough material before everything is completely automated, because that will take a while. And also, it is very vertical specific.
So completely automating something for problem A, you can probably achieve, but then you'll have to start over again for problem B, because your experimental setup looks very different, the machines with which you characterize your materials look very different, and even the models in your platform will have to be retrained and fine-tuned to the new class. So every time, you have a lot of learnings to transfer, but the problems are actually different. And so, yes, I would want that breakthrough material before it's completely automated, which I think is a long-term vision. And I would say every time you move to something new, you'll have to start retraining, and humans will have to come in again and say, okay, what does this problem look like? And then point the machine again in the new direction, and use it again.[01:28:40:01 - 01:28:47:17]RJ: For the non-scientists among us, me included, only a bit of a scientist, there's a lot of terminology. You mentioned DFT,[01:28:49:00 - 01:29:01:11]RJ: and equivariance we've talked about. Can you explain, at an engineering level of sophistication: what is equivariance?[01:29:01:11 - 01:29:55:01]Max: So equivariance is the infusion of symmetry into neural networks. If I build a neural network that needs to recognize this bottle, say, and then I rotate the bottle, it will have to completely start again, because it has no idea that the input representing a rotated bottle is actually a rotated bottle. It just doesn't understand that. If you build equivariance in, then once you've trained it in one orientation, it will understand it in any other orientation. That means you need a lot less data to train these models. And these are constraints on the weights of the model.
So basically you have to constrain the weights such that the network understands this, and you can build it in, you can hard-code it in. The symmetry groups can be translations, rotations, but also permutations: in a graph neural network, it's permutations. And physics, of course, has many more of these groups.[01:29:55:01 - 01:30:01:08]RJ: To play devil's advocate, why not just use data augmentation, where your bottle appears in all the different orientations?[01:30:01:08 - 01:30:58:23]Max: That's an option, it's just not exact. Why would you go through the work of doing all that, where you would really need an infinite number of augmentations to get it completely right, when you can also hard-code it in? Now, I have to say, sometimes data augmentation actually works even better than hard-coding the equivariance in. This has something to do with the fact that if you constrain the weights before the optimization starts, the optimization surface, the objective, becomes more complicated, and so it's harder to find good minima. So there is a complicated interplay, I think, between the optimization process and these constraints you put in your network. And so you'll hear contradicting claims in this field: some people say, for certain applications it just works better than not doing it, and sometimes you hear from other people that if you have a lot of data and you can do data augmentation, then it's actually easier to optimize and it works better than putting the equivariance in.[01:30:58:23 - 01:31:07:16]Brandon: Do you think there's kind of a bitter lesson for mathematically founded models and strategies for doing deep learning?[01:31:07:16 - 01:31:46:06]Max: Yeah, ultimately it's a trade-off between data and inductive bias. If your inductive bias is not perfectly correct, you have to be careful, because you put a ceiling on what you can do.
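The contrast drawn here, hard-coding a symmetry versus learning it from augmented data, can be illustrated with the permutation case mentioned for graph networks. This is a minimal sketch with invented random weights, not a trained model: a shared per-point transform followed by a sum pool is exactly invariant to input order by construction, while a naive flattened model is not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "point cloud": 5 points with 3 features each. The symmetry group is
# permutations of the points, the kind a graph neural network respects.
x = rng.normal(size=(5, 3))

# Random, untrained weights: any invariance below comes purely from the
# architecture, not from training or augmentation.
W1 = rng.normal(size=(3, 8))
W2 = rng.normal(size=(8, 1))
W_flat = rng.normal(size=15)

def pooled_model(points):
    """Shared per-point MLP followed by a sum pool.
    The sum makes the output exactly invariant to point order."""
    h = np.tanh(points @ W1)        # same weights applied to every point
    return float(np.sum(h @ W2))    # order-independent aggregation

def flat_model(points):
    """Naive baseline: flatten and apply one big linear map.
    Nothing ties the weights together, so point order matters."""
    return float(points.reshape(-1) @ W_flat)

perm = np.array([4, 0, 3, 1, 2])    # an explicit reordering of the points
print(np.isclose(pooled_model(x), pooled_model(x[perm])))   # True
print(np.isclose(flat_model(x), flat_model(x[perm])))       # almost surely False
```

The flat model could only become order-independent by seeing every permutation during training, which is exactly the "infinite augmentations" problem raised above.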
But if you know the symmetry is there, it's hard to imagine there isn't a way to actually leverage it. So yes, there is a bitter lesson, and one of the bitter lessons is you should always make sure your architecture scales, unless you have a tiny data set, in which case it doesn't matter. The same bitter lessons that you can draw in LLM space are eventually going to be true in this space as well, I think.[01:31:47:10 - 01:31:55:01]RJ: Can you talk a little bit about your upcoming book and tell the listeners what's exciting about it, why they should read it?[01:31:55:01 - 01:33:42:20]Max: So this book is called Generative AI and Stochastic Thermodynamics. It basically lays bare the fact that the mathematics that goes into generative AI, the technology to generate images and videos, and the mathematics of this field of non-equilibrium statistical mechanics, which studies systems of molecules that are moving around and relaxing to the ground state, or that you can control to be in a certain state, are actually identical. And that's fascinating. In fact, what's interesting is that Geoff Hinton and Radford Neal already wrote down the variational free energy for machine learning a long time ago, and there's also Karl Friston's work on the free energy principle and active inference. But now we've related it to this very new field in physics called stochastic thermodynamics, or non-equilibrium thermodynamics, which has its own very interesting theorems, like fluctuation theorems, which we don't typically talk about but can learn a lot from. And I think the two fields can now start to cross-fertilize.
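One standard way to make this correspondence concrete (in generic notation, not necessarily the book's) is to put the two sides' stochastic differential equations next to each other:

```latex
% Diffusion-model side: forward noising and its score-based reversal
dx_t = -\tfrac{1}{2}\beta(t)\,x_t\,dt + \sqrt{\beta(t)}\,dW_t
\quad \text{(forward)}
dx_t = \left[-\tfrac{1}{2}\beta(t)\,x_t - \beta(t)\,\nabla_{x}\log p_t(x_t)\right]dt
       + \sqrt{\beta(t)}\,d\bar{W}_t
\quad \text{(reverse)}

% Stochastic-thermodynamics side: overdamped Langevin relaxation
dx_t = -\nabla U(x_t)\,dt + \sqrt{2 k_B T}\,dW_t,
\qquad p_{\mathrm{eq}}(x) \propto e^{-U(x)/k_B T}
```

The forward noising process is itself a Langevin relaxation toward a Gaussian equilibrium (a quadratic potential, up to time rescaling), and the learned score term in the reverse equation is the extra non-equilibrium drift that steers the system back; training objectives like the variational bound can then be read as free-energy differences.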
When we see that these things are actually the same, we can, like we did for symmetries, look at this new theory that's out there, developed by these very smart physicists, and ask: okay, what can we take from here that will make our algorithms better? At the same time, we can use our models to help the scientists do better science. And so it becomes a beautiful cross-fertilization between the two fields. The book is rather technical, I would say. It takes all sorts of things that have been done in stochastic thermodynamics and all sorts of models from the machine learning literature, and it basically equates them to each other. I hope that sense of unification will be revealing to people.[01:33:42:20 - 01:33:44:05]RJ: Wait, and when is it out?[01:33:44:05 - 01:33:56:09]Max: Well, it depends on the publisher now, but I hope in April. I'm going to give a keynote at ICLR, and it would be very nice to have this book in my hand. But you know, it's hard to control these kinds of timelines.[01:33:56:09 - 01:33:58:19]RJ: Yeah, I'm looking forward to it. Great.[01:33:58:19 - 01:33:59:25]Max: Thank you very much. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.latent.space/subscribe
In this episode of the AI Agent & Copilot Podcast, John Siefert, CEO of Dynamic Communities and host of the podcast, is joined by Christopher Lochhead, bestselling author of "Play Bigger," to explore the shift from knowledge worker to "creator capitalist." Lochhead previews his new book, "Creator Capitalist," which he will officially launch at the 2026 AI Agent & Copilot Summit NA in San Diego, outlining how AI and agents are transforming value creation, careers, and leadership in the modern economy. Key Takeaways: From Knowledge Worker to Creator Capitalist: Lochhead explains that for decades, professionals operated as "knowledge workers," where "knowledge is power" and execution defined success. But now, AI and agents are "making the value of existing knowledge closer to free every day." He argues that professionals must shift upstream, focusing on identifying new problems and creating new value rather than executing within existing systems. Execution Is No Longer the Differentiator: For years, leaders were told that "ideas are a dime a dozen" and that execution was everything. But Lochhead bluntly states that human beings "cannot out-execute a GPU." As agents increasingly automate operational work, doubling down on efficiency won't protect careers. The Four Capitals Framework: Creator capitalists build a flywheel of four capitals: intellectual, relationship, reputational, and financial. Intellectual capital is your "different": the differentiated insight and judgment you uniquely bring. Relationship capital determines whose calls get answered. Reputational capital is not a personal brand, but "an earned reputation for results." Financial capital flows from creating massive value for others. Together, they compound into durable advantage.
Radical Responsibility in the AI Era: Lochhead stresses personal accountability: "If your career is a function of somebody else…you're in trouble." Waiting for an employer or title to define value is dangerous in a rapidly shifting environment. Instead, professionals must proactively design their trajectory, using AI as leverage to amplify their capabilities and create net-new value, rather than protect outdated roles. Out-Creating the Machine: The defining insight of the episode: "You can't out-execute a GPU, but you can out-create one." Siefert reinforces that curiosity, creativity, and critical thinking are not soft skills; they are survival skills. Those who embrace the creator capitalist mindset will not just adapt to AI disruption; they will become the most successful value creators in history. Visit Cloud Wars for more.
Welcome to episode 92 of the Podcast o technologii!
In this episode we talk about the biggest annual premiere in the Android world: the new Samsung Galaxy S26.
The podcast's partner is Insignis Media!
Discover the history of Nvidia and learn how a company making GPUs for nerds became the richest enterprise in the world.
Buy the book "Maszyna Myśląca" ("The Thinking Machine"): https://bit.ly/Maszyna_myśląca
We have three copies of the book for you! If you'd like a paper copy of "Maszyna Myśląca", send an email to kanalotechnologii@gmail.com with the title in the subject line. We'll send the book to the first three people who email us.
If you'd like your company's ad to appear in our podcast, send us an email at kanalotechnologii@gmail.com; you can also contact us at that address about the Technology Fair.
In this episode we also cover topics such as:
- Sam Altman's nonsense
- the earthquake at Xbox
- upcoming Apple launches
- payments on Huawei watches
Happy listening!
We record the podcast using the following microphones:
- Marcin uses a Sennheiser MK4
- Dawid uses a Logitech Yeti GX
- Łukasz uses an Austrian Audio OC818
The podcast's sound design is made with PSP Audioware plugins. With the code kanalotechnologii you get 10% off at pspaudioware.com.
Subscribe to Kanał o technologii!: https://linktr.ee/kanalotechnologii
Follow us on social media:
https://twitter.com/kanal_o_tech
https://www.instagram.com/kanalotechnologii/
https://www.facebook.com/kanalotechnologiii
Dawid Kosiński: https://twitter.com/kosa64
Marcin Połowianiuk: https://twitter.com/mpolowianiuk
Łukasz Kotkowski: https://twitter.com/lukaszkotkowski
See you soon!
Diffusion models changed how we generate images and video. Now they're coming for text.
In this episode, we sit down with Stefano Ermon, Stanford computer science professor and founder of Inception Labs, to unpack how diffusion works for language, why it can generate in parallel (instead of token-by-token), and what that means for latency, cost, and real-time AI products.
We talk through:
- The simplest mental model for diffusion: generate a full draft, then refine it by “fixing mistakes”
- Why today's autoregressive LLM inference is often memory-bound, and why diffusion can shift it toward a more GPU-friendly compute profile
- Where Mercury wins today (IDEs, voice/real-time agents, customer support, EdTech, anywhere humans can't wait)
- What changes (and what doesn't) for long context and architecture choices
- The real-world way to evaluate models in production: offline evals plus the gold-standard A/B test
Stefano also shares what's next on Mercury's roadmap, especially around stronger planning and reasoning for agentic use cases.
Try Mercury + learn more: inceptionlabs.ai
For more practical, grounded conversations on AI systems that actually work, subscribe to The Neuron newsletter at https://theneuron.ai.
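The "full draft, then refine" mental model can be sketched as a toy in a few lines of Python. Everything here is invented for illustration: the fixed `TARGET` list stands in for a trained model's per-position predictions, and the linear confidence schedule is made up. It shows only the parallel-refinement idea (every position is updated at once, unlike token-by-token decoding), not Inception Labs' actual method.

```python
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "a"]
# Stand-in for a trained model's per-position predictions (hypothetical).
TARGET = ["the", "cat", "sat", "on", "a", "mat"]

def denoise_step(draft, confidence):
    """Refine every position in parallel: with probability `confidence`,
    replace a token with the "model's" best guess for that position."""
    return [TARGET[i] if random.random() < confidence else tok
            for i, tok in enumerate(draft)]

def diffusion_generate(length, steps=5):
    # Start from pure noise: a draft of random tokens.
    draft = [random.choice(VOCAB) for _ in range(length)]
    for step in range(steps):
        confidence = (step + 1) / steps   # trust the model more each step
        draft = denoise_step(draft, confidence)  # all positions at once
    return draft

print(diffusion_generate(len(TARGET)))
```

Each `denoise_step` touches every position simultaneously, which is why a few refinement passes can replace many sequential token-by-token steps; that is the latency win the episode discusses.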
AI demand is scaling and infrastructure complexity is rising. Vultr CEO JJ Kardwell returns to the Bloomberg Intelligence Tech Disruptors podcast with an update on the market's AI cloud demand. He spoke to BI tech analyst Woo Jin Ho about production AI workloads, GPU utilization and lifecycle economics, global data-center strategy, supply-chain constraints and capital discipline, as well as outlining how privately held Vultr is positioning for durable growth in the industry's next phase.
Moonshots host Peter Diamandis speaks with Ben Horowitz, cofounder and general partner at a16z, alongside regular cohosts Salim Ismail, Dave Blundin, and Dr. Alexander Wissner-Gross, about whether AI can or should be paused, what happened when Horowitz told a Biden administration official that regulating AI means regulating math, why crypto is the natural money for AI agents, and why the gap between AI capability and societal adoption may be wider than people think. This episode originally aired on Peter Diamandis's Moonshots podcast.
Follow Peter H. Diamandis on X: https://x.com/PeterDiamandis
Follow Ben Horowitz on X: https://twitter.com/bhorowitz
Follow Salim Ismail on X: https://twitter.com/salimismail
Follow Dave Blundin on X: https://twitter.com/DavidBlundin
Follow Dr. Alexander Wissner-Gross on X: https://twitter.com/alexwg
Listen to Moonshots: https://www.youtube.com/@peterdiamandis
Stay Updated:
Find a16z on YouTube
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
professorjrod@gmail.com
In this episode, we explore the 'Pocket Revolution' that transformed not just the phone but the entire technology landscape. Discover how the iPhone's breakthrough in multi-touch science, silicon strategy, and platform economics reshaped IT skills development and technology education. We also discuss the impact of Apple's innovation on enterprise communication and how understanding these shifts can help you in your CompTIA exam prep and tech certification journey. Whether you're studying with a group or using a CompTIA study guide, this episode connects revolutionary tech history with practical IT skills development tips to help you succeed.
We dive into the hidden engine of the mobile era: the App Store. By standardizing distribution, payments, security reviews, and SDKs, Apple transformed a device into an ecosystem that seeded ridesharing, mobile banking, creator tools, and on-demand everything. Security became everyday: sandboxing, code signing, and direct OS updates reduced risk for consumers while biometrics and secure enclaves made cryptography feel effortless. At the same time, attention and data became currency. Push notifications, infinite feeds, and engagement loops pulled us into a new marketplace where design and business models overlapped with our habits and mental health.
Underneath the experience, custom silicon changed the game. We break down how Apple's SoCs integrated CPU, GPU, and neural engines to enable on-device AI, privacy-first biometrics, and unmatched performance per watt. Then we zoom out: supply chains as geopolitical power, BYOD reshaping workplace control, and regulation arriving as smartphones turn into infrastructure. Finally, we ask where we go from here: AR overlays, wearables, and ambient computing, or a cognitive leap where AI becomes the interface.
Subscribe, share with a friend who still misses their keyboard, and leave a review telling us what you think replaces the smartphone next.
Support the show
Art by Sarah/Desmond
Music by Joakim Karud
Little Chacha Productions
Juan Rodriguez can be reached at:
TikTok @ProfessorJrod
ProfessorJRod@gmail.com
@Prof_JRod
Instagram ProfessorJRod
This talk show is made possible in part by MSI. All opinions in this video are our own. MSI has no editorial input into the content and, like you, is seeing the video here on the site for the first time.
Just like last Friday and all the Fridays before it, three gentlemen are seated behind the familiar Gamekings desk: Huey, JJ, and Koos. Together they ring in the weekend with a new edition of Einde van de Week Live, the talk show in which we go over the week's most important gaming news. This time that includes news about Elder Scrolls 6 and a new engine, the sixth anniversary of GeForce Now, the closure of Bluepoint Games, and a new study showing that gaming loses the battle for attention to gambling, porn, and crypto. These and many other topics come up in this Einde van de Week Live of Friday, February 20.
Elder Scrolls 6 gets a new engine
In this anniversary year, the three also look back on gaming around the turn of the millennium, the years in which Gamekings began and took shape. Plus, as always, a lighthearted bit of entertainment at the end in the form of the segment Cool & Serious Uncool.
Buy the Katana 15 HX gaming laptop and get Resident Evil Requiem for free
This week MSI puts the Katana 15 HX in the spotlight: a gaming laptop with a 14th-generation Intel Core i7 HX processor, an NVIDIA GeForce RTX 5070 GPU, a 1TB SSD, a 15.6" 144Hz Full HD panel, 2x Type-C and 3x Type-A USB ports, and a 4-zone RGB keyboard. This powerful laptop is now available at Bol at a sharp price, and you get Resident Evil Requiem thrown in. And you want that...
Switch to KPN internet now and get a Switch 2 or PS5 as a gift
As of this week, KPN is running a promotion that, as a sensible gamer, you can hardly pass up. If you switch to KPN internet and take out a two-year subscription, you can choose a Switch 2 or a PS5 as a welcome gift. And we know plenty of people still want to get a Switch 2; plus the gamers who want to play GTA 6 in November and don't have a PS5 yet will benefit greatly from this deal too. Here you'll find all the info, the conditions, and the place to close this deal. And oh yes, fiber is of course the best option (if it's at least available in your street): no more lag, and in some places upload speeds of up to 4 gigabits. Nice to have if you want to game without worry.
James Chai, Visiting Fellow at ISEAS and former policy advisor to Malaysia's Ministry of Economy, joins Jeremy Au to unpack how Malaysia is repositioning itself in an era defined by AI, semiconductors, and geopolitical rivalry. They explore the country's shift from oil, gas, and plantations toward advanced manufacturing, examine how decades of semiconductor clustering built a quiet but durable export engine, and discuss why Malaysia is now doubling down on data centers and rare earths. The conversation covers US-China competition over chip supply chains, the strategic importance of fabrication and GPU ecosystems, and how rare earth processing may represent the most underappreciated leverage point in the global tech stack. James also explains why execution, not ambition, will determine whether Malaysia can capture long-term value from these emerging industries.
02:30 Malaysia balances growth with redistribution: The strategy is to raise high-value industries like semiconductors and rare earths while lifting the bottom 40 percent through social protection.
05:42 Semiconductor strength came from decades of compounding: Intel and other multinationals anchored early manufacturing, and local engineers accumulated expertise that later spun into globally competitive firms.
10:18 Clusters beat subsidies alone: Tight networks of engineers, spin-offs, and long-term continuity allowed Malaysia's chip ecosystem to survive volatility and keep upgrading.
21:05 China uses constraint as strategy: By limiting access to high-end Nvidia GPUs, Beijing forces domestic firms to innovate faster and close critical design gaps.
29:45 Chips are not oil: Frontier GPUs power model training, but most real-world AI use relies on inference, meaning older chips retain value longer than markets assume.
37:22 Data centers create investment headlines but unclear spillovers: Billions flow into Malaysia, yet long-term value depends on whether local firms capture supply chain and technology capabilities.
44:10 Rare earth processing is the real choke point: Deposits are global, but China controls the complex multi-step processing chain, making chemistry and technology control more strategic than mining alone.
Watch, listen or read the full insight at https://www.bravesea.com/blog/james-chai-rare-earth-power
Get transcripts, startup resources & community discussions at www.bravesea.com
WhatsApp: https://whatsapp.com/channel/0029VakR55X6BIElUEvkN02e
TikTok: https://www.tiktok.com/@jeremyau
Instagram: https://www.instagram.com/jeremyauz
Twitter: https://twitter.com/jeremyau
LinkedIn: https://www.linkedin.com/company/bravesea
English: Spotify | YouTube | Apple Podcasts
Bahasa Indonesia: Spotify | YouTube | Apple Podcasts
Chinese: Spotify | YouTube | Apple Podcasts
Vietnamese: Spotify | YouTube | Apple Podcasts
#MalaysiaEconomy #Semiconductors #RareEarths #DataCenters #USChinaTech #Geopolitics #AIStrategy #SupplyChains #IndustrialPolicy #BRAVEpodcast
The RAM-apocalypse continues of course, with hints of it hitting general manufacturers, and delays of gaming systems and even spinning hard drives. At least Micron is making some PCIe 6 drives you cannot have. Also, since we are sometimes audio geeks as well as PC, we talk about some bananas. Seriously. You'll just have to listen to get the scoop on bad Chrome extensions, bad Copilot, and bad password managers. Until then, enjoy Unread Tournament 2004!
Timestamps:
0:00 Intro
01:15 Patreon
03:22 Food with Josh
05:20 Acer and ASUS caught up in HEVC patent dispute
07:15 Intel's new annual GPU cadence
09:00 Micron is making PCI-E 6.0 SSDs that you can't have
11:10 WD CEO says storage is already sold out for 2026
14:07 Warning - many consumer electronics companies will fail this year
22:25 Sony may push PS6 launch as far as 2029
22:55 US reportedly removes two Chinese memory companies from banned list
25:54 RTX 5090 LIGHTNING is 5090 USD (list price, anyhow)
29:35 Audio dragged through the mud - and a banana
34:34 (In)Security Corner
45:07 Gaming Quick Hits
50:35 Jeremy reviews 25 USD speakers from Cyber Acoustics
56:53 Picks of the Week
1:08:51 Outro
★ Support this podcast on Patreon ★
Anshel Sag hosts episode 242 of the rebranded 6G Podcast and introduces new co-host Mike Dano (Ookla), noting the industry's “5G lull” and a shift toward 6G discussions. They discuss 5G Americas shutting down operations after years as a spectrum- and standards-focused trade association, framing the closure as a sign of cooling 5G interest and flat-to-negative RAN sales. Anshel covers Samsung and KT achieving a 3 Gbps downlink in 7 GHz using Keysight 6G test equipment and X-MIMO, noting the unclear bandwidth used and emphasizing that 6G progress is still largely experimental with mixed commercialization timelines (2028–2030). They debate 7 GHz as a key 6G band, propagation challenges (referencing Wi‑Fi 6E/7), the fading focus on terahertz bands, China's earlier stance on 6 GHz, and potential limited initial 6G deployments. Mike highlights an Ookla report on 5G standalone showing improved battery life versus NSA (EE +22%, O2 +11%) and argues operators under-market SA benefits. Anshel explains T-Mobile's John Saw concept of “kinetic tokens” for low-latency AI in motion (physical AI) across device/edge/cloud, tying it to use cases like real-time translation (5G Advanced, 50 languages) and ISAC for tracking and supporting drones, plus discussion of NVIDIA-based AI-RAN strategies and skepticism about cost and monetization of GPUs in base stations. Mike raises broader concerns about the AI data center boom, citing a projected $710B hyperscaler investment in 2026, power constraints (natural gas, gas turbines/jet engines), private high-bandwidth inter-data-center traffic, and questions about whether telecoms can capture AI value versus hyperscalers, while noting sovereign AI opportunities in countries with fewer data centers. 
They close with Microsoft and Ericsson integrating Ericsson Advanced Enterprise Mobility into Windows 11 (piloted on Surface 5G) to simplify secure enterprise 5G laptop management with Intune and eSIM provisioning, and discuss why cellular laptops haven't broadly taken off (cost, plans, coverage) and how Apple's modems and multi-carrier services might change adoption.
00:00 Welcome & New Co-Host Mike Dano Joins the 6G Podcast
01:10 Why the Rebrand Now: 5G Lull, MWC & Samsung Unpacked Tease
02:03 5G Americas Shuts Down: What It Says About the Market Cycle
05:41 Samsung + KT Hit 3 Gbps in 7 GHz: Early 6G Trial Reality Check
07:32 Where 6G Spectrum Lands: 7 GHz, Propagation, and Terahertz Hype Fades
12:58 Ookla Report Spotlight: 5G Standalone Boosts Battery Life (and Why It Matters)
17:54 Kinetic Tokens & Physical AI: T-Mobile's Vision for Low-Latency 6G
22:51 Is T-Mobile's “GPU in Every Base Station” Plan Actually Viable?
24:16 The Edge Compute Case: Double-Dipping GPUs for AI + XR Graphics
26:29 AI Wearables, AR Glasses, and Why 6G Timing Could Favor T-Mobile
28:27 The $710B Data Center Boom: What Hyperscaler Spend Means for Telecom
30:36 Powering AI: Natural Gas, Turbines, and the Nuclear Buildout Debate
31:25 Neo-Clouds & AI Transport: Private Backbone Links, Akamai GPU Rentals, and Wall Street Doubts
37:40 Microsoft + Ericsson Bring Enterprise 5G Management Natively to Windows 11
40:00 Why 5G Laptops Still Haven't Taken Off (Cost, Plans, Battery, Coverage)
41:41 What Changes in 6G: Apple Modems, Multi-Carrier Service, and the Road Ahead (Wrap-Up)
This Week In Startups is made possible by:
Gusto - Try Gusto today and get 3 months free at http://uber.com/ai-solutions
Crusoe Cloud - Reserve your capacity for the latest GPUs at http://uber.com/ai-solutions
Uber AI Solutions - Book a demo today at http://uber.com/ai-solutions
Today's show: It's a packed show! We've got YouTuber and Openclaw enthusiast Matthew Berman, Ryan Yaneli, founder of Nextvisit, and Jason Grad, founder of Massive! We're all in on Openclaw, but we have no doubts there's still room in the market for a GIANT Openclaw consumer app to shift the paradigm. What will that look like? Will it be an app? Will it be baked into the iPhone? Let's explore!
Timestamps:
00:00 Intro
02:04 Why Matthew thinks Openclaw is not ready yet to be brought to the consumer
04:45 Jason doesn't want hundreds of different apps, and thousands of tabs
05:45 Why Ryan sees Openclaw giving consumers access to opportunities they couldn't have gotten to otherwise
07:02 Only 10% of people are technical enough to install Openclaw
08:16 Would Openclaw be better off as an app?
08:27 Gusto. Check out the online payroll and benefits experts with software built specifically for small business and startups. Try Gusto today and get three months FREE at [Uber.com/twist](http://uber.com/ai-solutions)
00:10:52 The killer use case that could bring Openclaw to the consumer
00:12:13 Why Meta acquired Manus
00:15:13 How Ryan uses Openclaw in his personal life
00:18:44 Crusoe Cloud: Crusoe is the AI factory company. Reliable infrastructure and expert support. Visit crusoe.ai/savings to reserve your capacity for the latest GPUs today.
00:23:24 What Jason's “Clawpod” does
00:24:38 Jason demos his Openclaw workflow
00:28:23 Uber AI Solutions - Your trusted partner to get AI to work in the real world. Book a demo with them TODAY at http://uber.com/ai-solutions
00:30:04 How Matt used Openclaw to figure out he's been having stomach issues
00:32:27 What will be the ultimate UX for AI?
00:38:53 Anthropic has patched the ability to use Openclaw through its pro plan!
00:42:20 Matt and Jason hope for a multi-model future — but we haven't made progress!
00:52:21 Jason has skepticisms about the Openclaw foundation
00:52:59 Ryan predicts a new Openclaw fork coming from the shadows!
00:54:21 Peter Steinberger is going to OpenAI, NOT to work with Openclaw… Will he “orphan” Openclaw?
00:58:19 Does Raspberry AI stand a chance against Apple?
Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com/
Check out the TWIST500: https://www.twist500.com
Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp
Follow Lon:
X: https://x.com/lons
Follow Alex:
X: https://x.com/alex
LinkedIn: https://www.linkedin.com/in/alexwilhelm
Follow Jason:
X: https://twitter.com/Jason
LinkedIn: https://www.linkedin.com/in/jasoncalacanis
Thank you to our partners:
Gusto. Check out the online payroll and benefits experts with software built specifically for small business and startups. Try Gusto today and get three months FREE at [Uber.com/twist](http://uber.com/ai-solutions)
Crusoe Cloud: Crusoe is the AI factory company. Reliable infrastructure and expert support. Visit [crusoe.ai/savings] to reserve your capacity for the latest GPUs today.
Uber AI Solutions - Your trusted partner to get AI to work in the real world. Book a demo with them TODAY at [Uber.com/twist](http://uber.com/ai-solutions)
Check out all our partner offers: https://partners.launch.co/
Check out Jason's suite of newsletters: https://substack.com/@calacanis
Follow TWiST:
Twitter: https://twitter.com/TWiStartups
YouTube: https://www.youtube.com/thisweekin
Instagram: https://www.instagram.com/thisweekinstartups
TikTok: https://www.tiktok.com/@thisweekinstartups
Substack: https://twistartups.substack.com
Voice used to be AI's forgotten modality — awkward, slow, and fragile. Now it's everywhere. In this reference episode on all things Voice AI, Matt Turck sits down with Neil Zeghidour, a top AI researcher and CEO of Gradium AI (ex-DeepMind/Google, Meta, Kyutai), to cover voice agents, speech-to-speech models, full-duplex conversation, on-device voice, and voice cloning.
We unpack what actually changed under the hood — why voice is finally starting to feel natural, and why it may become the default interface for a new generation of AI assistants and devices.
Neil breaks down today's dominant “cascaded” voice stack — speech recognition into a text model, then text-to-speech back out — and why it's popular: it's modular and easy to customize. But he argues it has two key downsides: chaining models adds latency, and forcing everything through text strips out paralinguistic signals like tone, stress, and emotion. The next wave, he suggests, is combining cascade-like flexibility with the more natural feel of speech-to-speech and full-duplex conversation.
We go deep on full-duplex interaction (ending awkward turn-taking), the hardest unsolved problems (noisy real-world environments and multi-speaker chaos), and the realities of deploying voice at scale — including why models must be compact and when on-device voice is the right approach.
Finally, we tackle voice cloning: where it's genuinely useful, what it means for deepfakes and privacy, and why watermarking isn't a silver bullet.
If you care about voice agents, real-time AI, and the next generation of human-computer interaction, this is the episode to bookmark.
Neil Zeghidour
LinkedIn - https://www.linkedin.com/in/neil-zeghidour-a838aaa7/
X/Twitter - https://x.com/neilzegh
Gradium
Website - https://gradium.ai
X/Twitter - https://x.com/GradiumAI
Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck
FirstMark
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap
(00:00) Intro
(01:21) Voice AI's big moment — and why we're still early
(03:34) Why voice lagged behind text/image/video
(06:06) The convergence era: transformers for every modality
(07:40) Beyond Her: always-on assistants, wake words, voice-first devices
(11:01) Voice vs text: where voice fits (even for coding)
(12:56) Neil's origin story: from finance to machine learning
(18:35) Neural codecs (SoundStream): compression as the unlock
(22:30) Kyutai: open research, small elite teams, moving fast
(31:32) Why big labs haven't “won” voice AI
(34:01) On-device voice: where it works, why compact models matter
(46:37) The last mile: real-world robustness, pronunciation, uptime
(41:35) Benchmarking voice: why metrics fail, how they actually test
(47:03) Cascades vs speech-to-speech: trade-offs + what's next
(54:05) Hardest frontier: noisy rooms, factories, multi-speaker chaos
(1:00:50) New languages + dialects: what transfers, what doesn't
(1:02:54) Hardware & compute: why voice isn't a 10,000-GPU game
(1:07:27) What data do you need to train voice models?
(1:09:02) Deepfakes + privacy: why watermarking isn't a solution
(1:12:30) Voice + vision: multimodality, screen awareness, video+audio
(1:14:43) Voice cloning vs voice design: where the market goes
(1:16:32) Paris/Europe AI: talent density, underdog energy, what's next
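The cascaded stack the episode describes (speech recognition into a text model, then text-to-speech back out) can be sketched as a toy pipeline. The stage functions, their outputs, and the 10 ms sleeps below are all invented placeholders standing in for real model calls, not any actual system's API; the point is only that the stages run strictly in sequence, so their latencies add, and the middle model never sees anything but text.

```python
import time

# Invented placeholder stages; a real system would call ML models here.
def speech_to_text(audio: bytes) -> str:      # ASR stage
    time.sleep(0.01)                          # simulated model latency
    return "what's the weather?"

def text_llm(prompt: str) -> str:             # text-only LLM stage
    time.sleep(0.01)
    return "Sunny and mild today."

def text_to_speech(text: str) -> bytes:       # TTS stage
    time.sleep(0.01)
    return b"<synthesized audio>"

def cascaded_agent(audio: bytes):
    """Each stage must finish before the next starts, so latencies add up,
    and the LLM sees only the transcript: tone, stress, and emotion are lost."""
    start = time.perf_counter()
    reply_audio = text_to_speech(text_llm(speech_to_text(audio)))
    return reply_audio, time.perf_counter() - start

reply, latency = cascaded_agent(b"<user audio>")
print(f"end-to-end latency: {latency:.3f}s")  # roughly the sum of all three stages
```

A speech-to-speech model collapses the three sequential calls into one, which is where the latency and paralinguistics advantages Neil describes come from.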
"While the world talks about ChatGPT, we uncover a hardware revolution from Belgrade that makes AI possible at all, and does it 20 times faster than anything you've seen." In the second episode of the Pojačalo special series in collaboration with Next Sillicon, Ivan talks with Marko Skakun, AI Team Lead at their Belgrade office, about the revolution in the world of artificial intelligence and the hardware that powers it. Marko gives a detailed historical overview of the evolution of computing power: from generic CPUs, through specialized GPUs, to ultra-efficient ASIC chips. The conversation also traces the development of AI itself, from early neural networks and computer vision to the Transformer architecture and the "scaling laws" phenomenon that enabled the emergence of massive language models like ChatGPT, fundamentally changing the demands we place on hardware. In the second part, the focus shifts to NextSilicon's unique approach to meeting these challenges. Marko explains in detail the innovative dataflow architecture, which differs fundamentally from traditional designs by allowing the hardware to be flexible, adaptive, and more energy efficient. Special emphasis is placed on the Belgrade office, which is not just a support site but a key development center where teams work on the most advanced aspects of the technology, from chip design to AI compilers. Through Marko's personal story, we learn why working on such cutting-edge projects in Serbia has become not only possible but also extremely attractive to top experts from around the world.
Support us on BuyMeACoffee: https://bit.ly/3uSBmoa Read the transcript of this episode: https://bit.ly/4kGroRD Visit our site and sign up for our mailing list: http://bit.ly/2LUKSBG Subscribe to our YouTube channel: http://bit.ly/2Rgnu7o Follow Pojačalo on social media: FB: https://www.facebook.com/PojacaloRS/ IG: https://www.instagram.com/pojacalo.rs/ X: https://x.com/PojacaloRS LN: https://www.linkedin.com/company/pojacalo TikTok: https://www.tiktok.com/@pojacalo.rs
Everyone says clean your data before you invest in AI. The problem is, your data is getting dirty faster than you can clean it. In this episode, Keith sits down with Joe Onisick, founder of UnicornIQ, to dig into why data hygiene is AI's most underestimated failure point — and why throwing more GPU power [...]
Graphics processing units (GPUs) have become the most important commodity in the AI boom — and have made Nvidia a multi-trillion dollar company. But the tensor processing unit (TPU) could present itself as competition for the GPU.TPUs are developed by Google specifically for AI workloads. And so far, Anthropic, OpenAI and Meta have reportedly made deals for Google's TPUs.Christopher Miller, historian at Tufts University and author of "Chip War: The Fight for the World's Most Critical Technology," explains what this could mean.