Podcasts about GPUs

  • 1,257 podcasts
  • 2,866 episodes
  • 49m average duration
  • 2 daily new episodes
  • Latest episode: Dec 15, 2025


Latest podcast episodes about GPUs

Stansberry Investor Hour
The Pitfalls to Be Wary of During AI's Growing Pains

Dec 15, 2025 · 61:02


In this week's Stansberry Investor Hour, Dan and Corey welcome Luke Lango to the show. Luke is the senior investment analyst at our corporate affiliate InvestorPlace. He has built a reputation for spotting tech stocks on the verge of major market breakouts.

Luke kicks things off by sharing his thoughts on what many consider to be the current "AI bubble." He follows that up with how the jobs market will transition as AI continues to develop and how the economy will fare during that period. And he provides data on how the AI data-center epicenter has impacted the housing market. (0:00)

Next, Luke discusses the shift from companies using graphics processing units ("GPUs") to tensor processing units ("TPUs") for their data centers and why this is taking place. He then gives his thoughts on whether Intel can become a viable competitor again in this market. And he highlights the risks around the AI companies being interconnected and feeding into each other. (18:53)

Finally, Luke explains why he's pleased that Alphabet has begun to compete with Nvidia via its own TPUs. He also covers AI being used in ads and how companies like Meta Platforms have seen success with it in that area. The three share how they're using AI in their own personal use cases, and Luke gives his thoughts on what the big investment themes will be for 2026. (39:01)

@HPCpodcast with Shahin Khan and Doug Black

  • Nvidia H200 exports to China
  • H20, H200, and Chinese chips: how do they stack up?
  • A few fast GPUs vs. many slow GPUs
  • China's electricity production
  • Data center electricity use in the US
  • A cell-phone-sized AI supercomputer
  • HPC at the edge
  • Regulating AI

Audio: https://orionx.net/wp-content/uploads/2025/12/HPCNB_20251215.mp3

The post HPC News Bytes – 20251215 appeared first on OrionX.net.

This Week in Startups
Disney and OpenAI sign landmark deal… and we saw it coming! | E2223

Dec 13, 2025 · 57:58


This Week In Startups is made possible by:
LinkedIn Ads - http://linkedin.com/thisweekinstartups
DevStats - https://www.devstats.com/twist
Crusoe - https://crusoe.ai/build

Today's show: FINALLY, you can hang out with Kylo Ren and Olaf the Snowman… thanks to the magic of AI. On TWiST, we're digging into the mega OpenAI-Disney deal: Mickey is giving Sam Altman a $1 billion investment AND will allow its copyrighted characters to appear in Sora and ChatGPT images. Of course, Jason predicted this would happen WAY BACK during the summer months and even showed off his "Darth Calacanis" creation on the "All-In Podcast." PLUS: Amazon has been launching and pulling AI features from Prime Video… what gives? Jason's predictions on the coming AI blowback and who's on what side. Why he's so focused on Education, Health Care, and Housing as issues. AND why founders should always take calls from Big Companies, even if it might just be a fishing expedition. It's a new Friday TWiST! Check it out!

Timestamps:
(00:00) Lon joins Alex and Jason to talk about the big Disney-OpenAI deal bringing Disney characters to Sora
(03:10) Jason totally called the Disney-OpenAI stuff on All-In
(9:42) LinkedIn Ads: Start converting your B2B audience into high quality leads today. Launch your first campaign and get $250 FREE when you spend at least $250. Go to http://linkedin.com/thisweekinstartups to claim your credit.
(18:59) DevStats: DevStats integrates your dev work and your business goals into a shared language that everyone can understand. Get 20% off, plus access to their dedicated Slack channel. Just go to https://www.devstats.com/twist.
(20:15) Why Amazon Prime Video pulled its AI recaps and anime dubs
(24:44) Who gets to set the rules around AI: the debate continues
(26:13) Jason's predictions on the AI blowback coming in 2026… with clips!
(30:11) Crusoe Cloud: Crusoe is the AI factory company. Reliable infrastructure and expert support. Visit https://crusoe.ai/build to reserve your capacity for the latest GPUs today.
(31:21) Is AI here to help people or replace them?
(35:55) It's all about EHH: Education, Health Care, Housing
(40:47) How all of this and MORE will be impacted directly by AI automation
(45:35) Why Alex wants to lower the temperature around AI Doomerism
(51:19) JUST FOR FOUNDERS: When should you take a call from a BigCo?
(53:45) Why Jason thinks just about everyone in media will lose to TikTok and YouTube

Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com/
Check out the TWIST500: https://twist500.com
Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp

Follow Lon: https://x.com/lons
Follow Alex: https://x.com/alex | https://www.linkedin.com/in/alexwilhelm/
Follow Jason: https://twitter.com/Jason | https://www.linkedin.com/in/jasoncalacanis/

Late Confirmation by CoinDesk
THE MINING POD: ERCOT's 266 GW Surge, IREN's $2.3B Raise, GPUs > ASICs, Whatsminer M70

Dec 12, 2025 · 41:44


This week in bitcoin mining news: ERCOT sees 266 GW of interconnection requests, IREN closes a $2.3 billion convertible note offering, and GPUs are leaving ASICs in the dust. Subscribe to the Blockspace newsletter for market-making news as it hits the wire!

Welcome back to The Mining Pod! Today, Ethan Vera, COO of Luxor, joins us as we dive into MicroBT's Whatsminer M70 launching into a challenging ASIC market, IREN's $2.3 billion convertible note offering, the precarious state of hashprice, Luxor's new GPU hardware sales business, the staggering 270% leap in ERCOT interconnection requests, and the controversial Cat bitcoin fork proposal aimed at filtering ordinals/inscriptions.

Subscribe to the newsletter! https://newsletter.blockspacemedia.com

Notes:
  • Hashprice is below $40 per PH/s per day
  • Three negative difficulty adjustments
  • ERCOT requests leaped 270% in 2025
  • 73% of requests from data centers
  • IREN raised $2.3B in convertible notes
  • M70 efficiency: 12.5 J/TH

00:00 Start
02:35 Difficulty Report by Luxor
07:26 IREN note
10:44 M70 launch
20:02 Luxor launches GPU trading
27:12 ERCOT LL requests up 270% in 2025
34:10 Cry Corner: another filter fork proposal

The Hardware Unboxed Podcast
How The DRAM Crisis Will Affect Gaming GPUs (feat. Ed from Sapphire)

Dec 12, 2025 · 68:56


Episode 92: Edward Crisler from Radeon-exclusive AIB Sapphire joins the podcast to chat about the current GPU market. How will rising DRAM prices affect gaming GPUs? Can the GPU makers and AIBs absorb some of the increased cost? We also talk about RDNA 4 and how successful it's been compared to previous generations, AMD's true market share, and of course, the Sapphire Puke box art.

CHAPTERS
00:00 - Intro
01:03 - RDNA 4 Launch at Sapphire
05:11 - RDNA 4 vs Older Generations Success
11:32 - The DRAM Crisis
20:25 - AIBs Want More Control
24:48 - Thoughts on 12VHPWR
26:32 - How Are SKU Decisions Made?
32:35 - Sapphire Puke
35:27 - DRAM Pricing: What Can AMD and AIBs Do?
44:50 - AI-Focused GPU Makers Owe Everything to Gamers
50:56 - AMD's True Market Share
59:05 - The Key to RDNA 4's Success
1:03:13 - Outro with Ed's Favorite Sapphire Generation

SUBSCRIBE TO THE PODCAST
Audio: https://shows.acast.com/the-hardware-unboxed-podcast
Video: https://www.youtube.com/channel/UCqT8Vb3jweH6_tj2SarErfw

SUPPORT US DIRECTLY
Patreon: https://www.patreon.com/hardwareunboxed

LINKS
YouTube: https://www.youtube.com/@Hardwareunboxed/
Twitter: https://twitter.com/HardwareUnboxed
Bluesky: https://bsky.app/profile/hardwareunboxed.bsky.social

Dumb Money LIVE
The AI Stock Nobody Understands

Dec 12, 2025 · 59:17


AI isn't running out of GPUs… it's running out of power. Today on Dumb Money, the overlooked AI energy supplier that could be one of the most misunderstood stocks in the market.

Hashr8 Podcast
ERCOT's 266 GW Surge, IREN's $2.3B Raise, GPUs Eat ASICs, Whatsminer M70 Launch

Dec 12, 2025 · 41:44


Subscribe to the Blockspace newsletter for market-making news as it hits the wire!

Welcome back to The Mining Pod! Today, Ethan Vera, COO of Luxor, joins us as we dive into MicroBT's Whatsminer M70 launching into a challenging ASIC market, IREN's $2.3 billion convertible note offering, the precarious state of hashprice, Luxor's new GPU hardware sales business, the staggering 270% leap in ERCOT interconnection requests, and the controversial Cat bitcoin fork proposal aimed at filtering ordinals/inscriptions.

Subscribe to the newsletter! https://newsletter.blockspacemedia.com

Notes:
  • Hashprice is below $40 per PH/s per day
  • Three negative difficulty adjustments
  • ERCOT requests leaped 270% in 2025
  • 73% of requests from data centers
  • IREN raised $2.3B in convertible notes
  • M70 efficiency: 12.5 J/TH

00:00 Start
02:35 Difficulty Report by Luxor
07:26 IREN note
10:44 M70 launch
20:02 Luxor launches GPU trading
27:12 ERCOT LL requests up 270% in 2025
34:10 Cry Corner: another filter fork proposal

Web3 with Sam Kamani
332: Airbnb for Data Centers – How Aethir Is Powering the AI Boom with Distributed GPUs

Dec 12, 2025 · 43:15


AI demand for GPUs is exploding, and most of that capacity is locked inside underused data centers. In this episode, I talk with Mark from Aethir, a decentralized GPU cloud that aggregates idle, enterprise-grade GPUs into a global network. We discuss how Aethir feels like AWS on the front end but works like "Airbnb for data centers" behind the scenes, why compute demand outpaces supply, and how they keep latency low across 90+ countries. Mark also explains Aethir's token and revenue model, their work with EigenLayer, and why he believes solo founders now have superpowers in an AI-native world.

Key timestamps
[00:00:00] Intro: Sam introduces Mark and Aethir's decentralized GPU cloud.
[00:01:00] Mark's journey: From oil and gas infra and biotech to building GPU infrastructure for AI.
[00:04:00] What Aethir is: AWS-style GPU cloud on the front end, "Airbnb for data centers" on the back end.
[00:06:00] Enterprise-only GPUs: Why they only use data-center-grade hardware and no consumer devices.
[00:07:00] Exploding demand: GPU demand 6–8x supply, with inference-heavy apps driving the next wave.
[00:14:00] Global coverage: 90+ countries and routing users to nearby nodes for low latency.
[00:31:00] Business model: 20% protocol fee, 80% to GPU hosts, plus token rewards and staking for large clusters.
[00:39:00] Solo founder era: Why one-person AI-native companies will be extremely powerful.
[00:41:00] Mark's message: Focus on projects with strong fundamentals and keep building through cycles.

Connect
http://aethir.com/
https://www.linkedin.com/company/aethir-limited/
https://x.com/AethirCloud
https://www.linkedin.com/in/markrydon/
https://x.com/MRRydon

Disclaimer
Nothing mentioned in this podcast is investment advice; please do your own research. It would mean a lot if you can leave a review of this podcast on Apple Podcasts or Spotify and share this podcast with a friend.

Get featured
Be a guest on the podcast or contact us – https://www.web3pod.xyz/
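The revenue split quoted in the timestamps (a 20% protocol fee, with 80% paid out to GPU hosts) is simple enough to sketch in a few lines. The percentages come from the episode; the function and field names below are purely illustrative, not Aethir's actual API.

```python
# Illustrative sketch of the revenue model described in the episode:
# the protocol keeps a 20% fee, and 80% is paid out to GPU hosts.
# Names here are hypothetical, not Aethir's real interfaces.

PROTOCOL_FEE = 0.20  # 20% kept by the protocol

def split_revenue(gross_usd: float) -> dict:
    """Split gross compute revenue between the protocol and GPU hosts."""
    fee = gross_usd * PROTOCOL_FEE
    return {"protocol_fee": fee, "host_payout": gross_usd - fee}

print(split_revenue(1000.0))  # {'protocol_fee': 200.0, 'host_payout': 800.0}
```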

The IT Pro Podcast
TPUs: Google's home advantage

Dec 12, 2025 · 29:20


In the race to train and deploy generative AI models, companies have poured hundreds of billions of dollars into GPUs, chips that have become essential for the parallel processing needs of large language models. Nvidia alone has forecast $500 billion in sales across 2025 and 2026. Jensen Huang, founder and CEO of Nvidia, recently stated that "inference has become the most compute-intensive phase of AI — demanding real-time reasoning at planetary scale".

Google is meeting these demands in its own way. Unlike firms reliant on chips from Nvidia, AMD, and others, Google has long used its in-house 'tensor processing units' (TPUs) for AI training and inference. What are the benefits and drawbacks of Google's reliance on TPUs? And how do its chips stack up against the competition?

In this episode, Jane and Rory discuss TPUs, Google's specialized processors for AI and ML, and how they could help the hyperscaler outcompete its rivals.

The New Stack Podcast
Kubernetes GPU Management Just Got a Major Upgrade

Dec 11, 2025 · 35:26


Nvidia Distinguished Engineer Kevin Klues noted that low-level systems work is invisible when done well and highly visible when it fails — a dynamic that frames current Kubernetes innovations for AI. At KubeCon + CloudNativeCon North America 2025, Klues and AWS product manager Jesse Butler discussed two emerging capabilities: dynamic resource allocation (DRA) and a new workload abstraction designed for sophisticated AI scheduling.

DRA, now generally available in Kubernetes 1.34, fixes long-standing limitations in GPU requests. Instead of simply asking for a number of GPUs, users can specify types and configurations. Modeled after persistent volumes, DRA allows any specialized hardware to be exposed through standardized interfaces, enabling vendors to deliver custom device drivers cleanly. Butler called it one of the most elegant designs in Kubernetes.

Yet complex AI workloads require more coordination. A forthcoming workload abstraction, debuting in Kubernetes 1.35, will let users define pod groups with strict scheduling and topology rules — ensuring multi-node jobs start fully or not at all. Klues emphasized that this abstraction will shape Kubernetes' AI trajectory for the next decade and encouraged community involvement.

Learn more from The New Stack about dynamic resource allocation:
Kubernetes Primer: Dynamic Resource Allocation (DRA) for GPU Workloads
Kubernetes v1.34 Introduces Benefits but Also New Blind Spots
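To make the DRA model described above concrete, here is a rough sketch of a typed GPU request expressed as a ResourceClaim that a pod then references. Treat this as an assumption-laden illustration rather than a tested manifest: the `resource.k8s.io` API changed shape across its alpha, beta, and GA releases, so field names may differ on your cluster, and the device class name `gpu.example.com` is a placeholder for whatever class a vendor's DRA driver actually publishes.

```yaml
# Sketch only: field layout follows the resource.k8s.io API around its
# Kubernetes 1.34 GA, but check `kubectl explain resourceclaim` on your
# cluster before relying on it.
apiVersion: resource.k8s.io/v1
kind: ResourceClaim
metadata:
  name: gpu-claim
spec:
  devices:
    requests:
    - name: gpu
      exactly:
        deviceClassName: gpu.example.com   # placeholder class from a vendor driver
---
apiVersion: v1
kind: Pod
metadata:
  name: training-pod
spec:
  resourceClaims:
  - name: gpu
    resourceClaimName: gpu-claim
  containers:
  - name: train
    image: example.com/train:latest       # placeholder image
    resources:
      claims:
      - name: gpu
```

The point of the indirection is the one Butler describes: the pod asks for a named claim, the claim names a device class, and the vendor driver decides how that class maps onto real hardware.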

Oxigênio
#208 – AI's infrastructure: what data centers are and the risks they pose

Dec 11, 2025 · 34:08


Artificial intelligence, in its many senses, has come to dominate the public agenda and even the direction of Big Tech capital. But have you ever stopped to think about the gigantic infrastructure needed to sustain AI's accelerated growth? The present and future of artificial intelligence depend on the existence of data centers, and it is now more urgent than ever that we discuss this. We are watching a movement take shape that looks a lot like digital colonialism: with growing resistance to the construction of data centers in the Global North, companies and governments seem determined to bring these enormous facilities, with all their negative impacts, to the Global South. In this episode, Yama Chiodi and Damny Laya talk with researchers, activists, and affected communities to deepen the debate about AI's material infrastructure. We discuss what data centers are and how they impact, and will impact, our lives. In the second episode, we revisit movements resisting their installation in Brazil and look at how our country fits into the debate, following the perspective of activists and researchers who are pushing for fairer regulation of these large projects.

______________________________________________________________________________________________

SCRIPT

[ series jingle ]

[ Bio Unit music starts ]

YAMA: Artificial intelligence, in its many senses, has come to dominate the public agenda and even the direction of Big Tech capital. But have you ever stopped to think about the gigantic infrastructure needed to sustain AI's accelerated growth?

DAMNY: The present and future of artificial intelligence depend on the existence of data centers. And it is now more urgent than ever that we discuss this.
We are watching a movement take shape that looks a lot like digital colonialism: with growing resistance to the construction of data centers in the Global North, companies and governments seem determined to bring data centers, with all their negative impacts, to the Global South.

YAMA: We talked with researchers, activists, and affected communities, and across two episodes we will try to deepen the debate about AI's material infrastructure. In the first, we discuss what data centers are and how they impact, and will impact, our lives.

DAMNY: In the second, we revisit movements resisting their installation in Brazil and look at how our country fits into the debate, following the perspective of activists and researchers who are pushing for fairer regulation of these large projects.

[ low tone ]

YAMA: I'm Yama Chiodi, a science journalist and researcher in the field of climate change. If you already listen to Oxigênio, you may have heard me in the Cidade de Ferro series or in the episode about the Anthropocene. Over the past few months I investigated the environmental impacts of artificial intelligence for a joint project between LABMEM, the laboratory for technological change, energy, and the environment, and Oxigênio. Last September, Damny joined me to build these episodes together. And not by chance: last October, Damny published a report on the socio-environmental impacts of data centers in Brazil, titled "Não somos quintal de data center" ("We are not a data center's backyard"). The link to the full report is in the episode description. Welcome to Oxigênio, Dam.

DAMNY: Hi Yama. Thanks for the invitation to build these episodes together.

YAMA: It's a pleasure, my friend.

DAMNY: I also work as a science journalist, and I've been an internet governance researcher for quite some time.
I'm now working as a journalist and researcher here at LABJOR, but when I wrote the report I was working as a researcher-consultant at the NGO IDEC, the Brazilian Institute for Consumer Defense.

YAMA: We'll start after the jingle.

[ Bio Unit music ends ]

[ Oxigênio jingle ]

[ Documentary music starts ]

YAMA: In media coverage of data centers, you've probably heard the framing that tells you how many liters of water each question to ChatGPT consumes. We don't much like that approach here. Among other reasons, because it reduces the problem of AI's socio-environmental impacts to a question of individual consumption, and that is both a political and a factual error. Calculating how much water each ChatGPT question uses shifts responsibility from the companies to the users, hiding the true scale of the problem. Even if individual consumption grows quickly and explosively, it will always be a small fraction of the problem. Data centers operate at industrial scale, crunching staggering amounts of data to train models and run other corporate services. A single facility can consume more energy in a day than the cities hosting it consume over a month.

DAMNY: We've grown used to imagining artificial intelligence as an ethereal "cloud," but in reality it only exists thanks to monstrous data centers that consume absurd quantities of natural resources. The social and environmental impacts are severe. Data centers are machines for consuming energy, water, and land, and they create air and noise pollution, in a model that reinforces old patterns of environmental racism. These infrastructures are often developed with no input from the affected communities, replaying the global playbook of environmental injustice.
Following their supply networks, we can see their impacts on rivers, soil, air, and Indigenous territories, and in the growing demand for critical minerals and, consequently, for deeply destructive mining practices.

YAMA: According to researcher Tamara Kneese, director of the Climate, Technology, and Justice program at the research institute Data & Society, whom we interviewed, this infrastructure is creating a new form of technological colonialism. The environmental damage is frequently directed at the most vulnerable communities, from rural areas to the outskirts of big cities, which become sacrifice zones for the industry's progress.

DAMNY: On top of that, the growing dissatisfaction of Global North communities with data centers has had the colonial effect of outsourcing these structures to the Global South. And Brazil is not only no exception, it seems to be a preferred destination because of its abundant supply of clean energy.

[pause]

And with the blessing of the federal government, which has just published a provisional measure called REDATA, whose goal is to attract data centers to Brazil with tax exemptions and very few obligations.

[ Documentary music ends ]

[ low tone ]

VOICE OVER: BLOCK 1 – WHAT ARE DATA CENTERS?

YAMA: To understand what data centers are, we first need to understand that artificial intelligence is not merely an ethereal cloud that exists only virtually. That is how we began our conversation with the American researcher Tamara Kneese. She is the director of the Climate, Technology, and Justice program at the research institute Data & Society.

TAMARA: So I think, you know, the problem with our relationship to computing is that, most of the time, we don’t really think that much about the materiality of the computing system and the larger supply chain. You know, thinking about the fact that, of course, everything we do relies not just on our own device, or the particular cloud services that we subscribe to, but also on a much larger supply chain. So, where does the hardware come from, that we are using, and what kind of labor practices are going into that? And then be, you know, further back in the supply chain, thinking about raw materials and critical minerals and other forms of extraction, and human rights abuses and labor abuses that also go into the production of the raw materials that we need for computing in general.

DAMNY: Tamara has written a lot about how the cloud metaphor deceives us, because it makes it hard to see the full chain involved in processing so much data. And that became a much bigger issue with the creation of chatbots and generative AI.

YAMA: If the pandemic already marked a turning point in the growing need for data processing, when school and work moved onto our computers, the generative AI boom created an unprecedented need to expand these chains.

DAMNY: And at the far end of every cloud's infrastructure are the data centers.
Beyond generating enormous socio-environmental impacts, they are the clearest way to see that the current pace of AI expansion cannot continue for long, due to physical limits. There is not enough land or natural resources to sustain it.

YAMA: We talked with Cynthia Picolo, Executive Director of LAPIN, the Laboratory of Public Policy and Internet. LAPIN has been very active against rights violations in the rollout of data centers in Brazil, and we'll talk more about that later.

DAMNY: One of the things Cynthia helped us understand is that we cannot dissociate AI from data centers.

CYNTHIA: There is a materiality behind it. There is a physical infrastructure, which is the data centers. Data centers are these large structures capable of storing, processing, and transferring the data, the processing that makes artificial intelligence possible, that lets it develop. One does not exist without the other. To talk about AI is to talk about data centers. There is no way to separate them.

YAMA: But what does a data center look like? Tamara describes what we can see in photos and videos online.

TAMARA: Yeah, so, you know, essentially, they’re like giant warehouses of chips, of servers, of networked systems, and, you know, they look like basically nondescript square buildings, very similar. And you wouldn’t really know that it’s a data center unless you look at the lighting, and you kind of realize that something… like, it’s not inhabited by people or workers, really.
DAMNY: In the next block, we try to summarize the main socio-environmental problems that data centers already cause and will cause with far more intensity in the future.

[ low tone ]

VOICE OVER: BLOCK 2 – THE ENORMOUS LIST OF PROBLEMS

YAMA: Energy consumption is probably the best-known problem with data centers and AI. According to the International Energy Agency (IEA), an international organization Brazil belongs to, data centers are estimated to have consumed about 415 TWh in 2024. For comparison, according to EPE (Empresa de Pesquisa Energética), a public research institute tied to Brazil's Ministry of Mines and Energy, Brazil consumed about 600 TWh in 2024.

DAMNY: According to the same IEA report, electricity consumption by data centers in 2030 is projected to reach at least 945 TWh, which would represent 3% of all projected global consumption. When we look at estimates from other sources, though, these projections are arguably conservative, especially considering the impact of the popularization of the so-called LLMs, or large language models.

YAMA: In other words, even under conservative projections, the world's data centers would consume in 2030, less than five years from now, about 50% more energy than all of Brazil consumes today. According to the IEA, by 2030 global electricity consumption by data centers should be equivalent to that of India, the world's most populous country. And some local situations are even more precarious.

DAMNY: Ireland is one such case. According to a New York Times report published last October, electricity consumption by data centers there is expected to represent at least 30% of the country's total consumption in the coming years. But why do data centers consume so much energy?
TAMARA: So, you know, particularly with the kinds of AI that companies are investing in right now, there’s a need for more powerful chips, GPUs, and so data centers are also about providing enough energy and computational power for these powerful language models to be trained and then used. And so the data center also, you know, in part because it does require so much energy, and it’s just this incredibly energy-intensive thing, you also need water. And the water comes from having to cool the servers, and so… So there are a number of different cooling systems that use water. And then on top of that, you also need backup energy sources, so sometimes, because there’s such a draw on the power grid, you have to have backup generators to make sure that the data center can keep going if something happens with the grid.

YAMA: And here we start to grasp the size of the problem. Data centers are often built in places that already suffer from precarious electricity infrastructure and a lack of drinking water.
Então eles criam problemas de escassez onde não havia e aprofundam essa escassez em locais onde isso já era uma grande questão – como a região metropolitana de Fortaleza sobre a qual falaremos no próximo episódio, que está em vias de receber um enorme data center do Tiktok. DAMNY: É o que também relatam os moradores de Querétaro, no México, que vivem na região dos data centers da Microsoft. A operação dos data centers da Microsoft gerou uma crise sem precedentes, com quedas frequentes de energia e o interrompimento do abastecimento de água que muitas vezes duram semanas. Os data-centers impactaram de tal forma as comunidades que escolas cancelaram aulas e, indiretamente, foram responsáveis por uma crise de gastroenterite entre crianças. YAMA: E isso nos leva pro segundo ponto. O consumo de água, minerais críticos e outros recursos naturais. TAMARA: [O problema da energia tem recebido mais atenção, porque é uma fonte de ansiedade também. Pensar sobre o aumento da demanda de energia em tempos em que supostamente estaríamos transicionando para deixar de usar energias fósseis, o que obviamente pode ter efeitos devastadores. Mas eu acredito que num nível mais local, o consumo de água é mais relevante. Nós temos grandes empresas indo às áreas rurais do México, por exemplo, e usando toda a água disponível e basicamente deixando as pessoas sem água. E isso é incrivelmente problemático. Então isso acontece em áreas que já tem problemas de abastecimento de água, onde as pessoas já não tem muito poder de negociação com as empresas. Não têm poder político pra isso. São lugares tratados como zonas de sacrifício, algo que já vimos muitas vezes no mundo, especialmente em territórios indígenas. Então as consequências são na verdade muito maiores do que só problemas relacionados à energia. 
] I think the energy problem has probably gotten the most attention, just because it is a source of anxiety, too, so thinking about, you know, energy demand at a time when we’re supposed to be transitioning away from fossil fuels. And clearly, the effects that that can have will be devastating. But I think on a local level, things like the water consumption can matter more. So, you know, if we have tech companies moving into rural areas in Mexico and, you know, using up all of their water and basically preventing people in the town from having access to water. That is incredibly problematic. So I think, you know, in water-stressed areas and areas where the people living in a place don’t have as much negotiating power with the company. Don’t have as much political power, and especially if places are basically already treated as sacrifice zones, which we’ve seen repeatedly many places in the world, with Indigenous land in particular, you know, I think the consequences may go far beyond just thinking about, you know, the immediate kind of energy-related problems. YAMA: Existem pelo menos quatro fins que tornam os data centers máquinas de consumir água. O mais direto e local é a água utilizada na refrigeração de todo equipamento que ganha temperatura nas atividades de computação, o processo conhecido como cooling. Essa prática frequentemente utiliza água potável. Apesar de já ser extremamente relevante do ponto de vista de consumo, essa é apenas uma das formas de consumo abundante de água. DAMNY: Indiretamente, os data centers também consomem a água relacionada ao seu alto consumo de energia, em especial na geração de energia elétrica em usinas hidrelétricas e termelétricas. Também atrelado ao consumo energético está o uso nas estações de tratamento de água, que visam tratar a água com resíduos gerada pelo data center para tentar reduzir a quantidade de água limpa utilizada. 
YAMA: Por fim, a cadeia de suprimentos de chips e servidores que compõem os data centers requer água ultrapura e gera resíduos químicos. Ainda que se saiba que esse fator gera gastos de água e emissões de carbono relevantes, os dados são super obscuros, entre outros motivos, porque a maioria dos dados que temos sobre o consumo de água em data centers são fornecidos pelas próprias empresas. CYNTHIA: A água e os minérios são componentes também basilares para as estruturas de datacenter, que são basilares para o funcionamento da inteligência artificial. (…). E tem toda uma questão, como eu disse muitas vezes, captura um volume gigante de água doce. E essa água que é retornada para o ecossistema, muitas vezes não é compensada da água que foi capturada. Só que as empresas também têm uma promessa em alguns relatórios, você vai ver que elas têm uma promessa até de chegar em algum ponto para devolver cento e vinte por cento da água. Então a empresa está se comprometendo a devolver mais água do que ela capturou. Só que a realidade é o quê? É outra. Então, a Google, por exemplo, nos últimos cinco anos, reportou um aumento de cento e setenta e sete por cento do uso de água. A Microsoft mais trinta e oito e a Amazon sequer reporta o volume de consumo de água. Então uma lacuna tremenda para uma empresa desse porte, considerando todo o setor de Data centers. Mas tem toda essa questão da água, que é muito preocupante, não só por capturar e o tratamento dela e como ela volta para o meio ambiente, mas porque há essa disputa também com territórios que têm uma subsistência muito específica de recursos naturais, então existe uma disputa aí por esse recurso natural entre comunidade e empreendimento. DAMNY: Nessa fala da Cynthia a gente observa duas coisas importantes: a primeira é que não existe data center sem água para resfriamento, de modo que o impacto local da instalação de um empreendimento desses é uma certeza irrefutável. E é um dano contínuo. 
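A distinção entre consumo direto de água (resfriamento) e indireto (geração de eletricidade) que acabamos de descrever pode ser esboçada com a contabilidade WUE/PUE usada na literatura, por exemplo no estudo de Li et al. listado nas referências do episódio. Um esboço mínimo, com números puramente ilustrativos (nenhum valor abaixo é medição de um data center real):

```python
# Esboço da contabilidade de água no estilo de Li et al. (2023), "Making AI
# Less 'Thirsty'". Todos os números são ilustrativos, não medições reais.

def agua_data_center_litros(energia_ti_kwh, wue_local=0.55, pue=1.2, ewif_rede=3.1):
    """Estima a pegada hídrica total, em litros.

    energia_ti_kwh : energia consumida pelos servidores (carga de TI)
    wue_local      : litros de água de resfriamento por kWh de TI
    pue            : Power Usage Effectiveness (energia total / energia de TI)
    ewif_rede      : litros de água por kWh na geração de eletricidade
    """
    direto = energia_ti_kwh * wue_local            # torres de resfriamento
    indireto = energia_ti_kwh * pue * ewif_rede    # usinas geradoras
    return direto + indireto

# 1 MWh de carga de TI sob essas premissas hipotéticas:
print(agua_data_center_litros(1000))  # aproximadamente 4270 litros
```

Sob essas premissas, a água indireta (via geração de energia) supera a do resfriamento local, o que ilustra por que relatórios restritos ao consumo dentro da instalação subestimam a pegada total.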
Enquanto ele estiver em operação, ele precisará da água. É como se uma cidade de grande porte chegasse de repente, demandando uma quantidade de água e energia que o local simplesmente não tem para oferecer. E na hora de escolher entre as pessoas e empreendimentos multimilionários, adivinha quem fica sem água e com a energia mais cara? YAMA: A segunda coisa importante que a Cynthia fala é quando ela nos chama a atenção sobre a demanda por recursos naturais. Nós sabemos que recursos naturais são escassos. Mais do que isso, recursos naturais advindos da mineração têm a sua própria forma de impactos sociais e ambientais, o que vemos frequentemente na Amazônia brasileira. O que acontecerá com os data centers quando os recursos naturais locais já não forem suficientes para seu melhor funcionamento? Diante de uma computação que passa por constante renovação pela velocidade da obsolescência, o que acontece com o grande volume de lixo eletrônico gerado por data centers? Perguntas que não têm resposta. DAMNY: A crise geopolítica em torno dos minerais conhecidos como terras-raras mostra a complexidade política e ambiental do futuro das IAs do ponto de vista material e das suas cadeias de suprimento. No estudo feito pelo LAPIN, a Cynthia nos disse que considera que esse ponto do aumento da demanda por minerais críticos que as IAs causam é um dos pontos mais opacos nas comunicações das grandes empresas de tecnologia sobre o impacto de seus data centers. CYNTHIA: E outro ponto de muita, muita lacuna, que eu acho que do nosso mapeamento, desses termos mais de recursos naturais. A cadeia de extração mineral foi o que mais foi opaco, porque, basicamente, as empresas não reportam nada sobre essa extração mineral e é muito crítico, porque a gente sabe que muitos minérios vêm também de zonas de conflito. Então as grandes empresas, pelo menos as três que a gente mapeou, elas têm ali um trechinho sobre uma prestação de contas da cadeia mineral. 
Tudo que elas fazem é falar que elas seguem um framework específico da OCDE sobre responsabilização. YAMA: Quando as empresas falam de usar energias limpas e de reciclar a água utilizada, eles estão se desvencilhando das responsabilidades sobre seus datacenters. Energia limpa não quer dizer ausência de impacto ambiental. Pras grandes empresas, as fontes de energia limpa servem para gerar excedente e não para substituir de fato energias fósseis. Você pode ter um data center usando majoritariamente energia solar no futuro, mas isso não muda o fato de que ele precisa funcionar 24/7 e as baterias e os geradores a diesel estarão sempre lá. Além disso, usinas de reciclagem de água, fazendas de energia solar e usinas eólicas também têm impactos socioambientais importantes. O uso de recursos verdes complexifica o problema de identificar os impactos locais e responsabilidades dos data centers, mas não resolve de nenhuma forma os problemas de infraestrutura e de fornecimento de água e energia causados pelos empreendimentos. DAMNY: É por isso que a gente alerta pra não comprar tão facilmente a história de que cada pergunta pro chatGPT gasta x litros de água. Se você não perguntar nada pro chatGPT hoje, ou se fizer 1000 perguntas, não vai mudar em absolutamente nada o alto consumo de água e os impactos locais destrutivos dos data centers que estão sendo instalados a todo vapor em toda a América Latina. A quantidade de dados e de computação que uma big tech usa para treinar seus modelos, por exemplo, jamais poderá ser equiparada ao consumo individual de chatbots. É como comparar as campanhas que te pedem pra fechar a torneira ao escovar os dentes, enquanto o agro gasta em minutos água que você não vai gastar na sua vida inteira. Em resumo, empresas como Google, Microsoft, Meta e Amazon só se responsabilizam pelos impactos diretamente causados por seus data centers e, mesmo assim, é uma responsabilização muito entre aspas, à base de greenwashing. 
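O argumento de escala acima cabe numa conta de guardanapo. Todos os números abaixo são hipotéticos e servem só para ilustrar ordens de grandeza:

```python
# Comparação de ordem de grandeza entre o uso individual de um chatbot e a
# demanda hídrica de uma instalação inteira. Todos os valores são hipotéticos.

ML_POR_PERGUNTA = 25                 # hipotético: mililitros de água por pergunta
PERGUNTAS_POR_DIA = 1000             # um usuário individual extremamente intenso

LITROS_DIA_DATA_CENTER = 2_000_000   # hipotético: demanda diária da instalação

litros_dia_individual = ML_POR_PERGUNTA * PERGUNTAS_POR_DIA / 1000  # 25 litros
fracao = litros_dia_individual / LITROS_DIA_DATA_CENTER

print(f"{litros_dia_individual} L/dia individuais contra "
      f"{LITROS_DIA_DATA_CENTER} L/dia da instalação ({fracao:.5%})")
```

Mesmo um uso individual exagerado resulta numa fração mínima do consumo da instalação, que continua o mesmo independentemente das perguntas que cada pessoa faz ou deixa de fazer.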
Você já ouviu falar de greenwashing? CYNTHIA: Essa expressão em inglês nada mais é do que a tradução literal, que é o discurso verde. (…)É justamente o que a gente está conversando. É justamente quando uma empresa finge se preocupar com o meio ambiente para parecer sustentável, mas, na prática, as ações delas não trazem esses benefícios reais e, pelo contrário, às vezes trazem até danos para o meio ambiente. Então, na verdade, é uma forma até de manipular, ou até mesmo enganar as pessoas, os usuários daqueles sistemas ou serviços com discursos e campanhas com esses selos verdes, mas sem comprovar na prática. YAMA: Nesse contexto, se torna primordial que a gente tenha mais consciência de toda a infraestrutura material que está por trás da inteligência artificial. Como nos resumiu bem a Tamara: TAMARA: [ Eu acredito que ter noção da infraestrutura completa que envolve a cadeia da IA realmente ajuda a entender a situação. Mesmo que você esteja usando, supostamente, energia renovável para construir e operar um data center, você ainda vai precisar de muitos outros materiais, chips, minerais e outras coisas com suas próprias cadeias de suprimento. Ou seja, independente da forma de energia utilizada, você ainda vai causar dano às comunidades e destruição ambiental. ] But that… I think that is why having a sense of the entire AI supply chain is really helpful, just in terms of thinking about, you know, even if you’re, in theory, using renewable energy to build a data center, you still are relying on a lot of other materials, including chips, including minerals, and other things that. (…) We’re still, you know, possibly going to be harming communities and causing environmental disruption. [ tom baixo ] YAMA: Antes de a gente seguir pro último bloco, eu queria só dizer que a entrevista completa com a Dra. Tamara Kneese foi bem mais longa e publicada na íntegra no blog do GEICT. 
O link para a entrevista tá na descrição do episódio, mas se você preferir pode ir direto no blog do GEICT. [ tom baixo ] VOICE OVER: BLOCO 3 – PROBLEMAS GLOBAIS, PROBLEMAS LOCAIS YAMA: Mesmo conhecendo as cadeias, as estratégias de greenwashing trazem um grande problema à tona, que é uma espécie de terceirização das responsabilidades. As empresas trazem medidas compensatórias que não diminuem em nada o impacto local dos seus data centers. Então tem uma classe de impactos que são globais, como as emissões de carbono e o aumento da demanda por minerais críticos, por exemplo. E globais no sentido de que eles são parte relevante dos impactos dos data centers, mas não estão impactando exatamente nos locais onde foram construídos. CYNTHIA: Google, por exemplo, nesse recorte que a gente fez da pesquisa dos últimos cinco anos, ela simplesmente reportou um aumento de emissão de carbono em setenta e três por cento. Não é pouca coisa. A Microsoft aumentou no escopo dois, que são as emissões indiretas, muito por conta de data centers, porque tem uma diferenciação por escopo, quando a gente fala de emissão de gases, a Microsoft, nesse período de cinco anos, ela quadruplicou o tanto que ela tem emitido. A Amazon aumentou mais de trinta por cento. Então a prática está mostrando que essas promessas estão muito longe de serem atingidas. Só que aí entra um contexto mais de narrativa. Por que elas têm falado e prometido a neutralidade de carbono? Porque há um mecanismo de compensação. (…) Então elas falam que estão correndo, correndo para atingir essa meta de neutralidade de carbono, mas muito por conta dos instrumentos de compensação, compensação ou de crédito de carbono ou, enfim, para uso de energias renováveis. Então se compra esse certificado, se fazem esses contratos, mas, na verdade, não está tendo uma redução de emissão. Está tendo uma compensação. (…) Essa compensação é um mecanismo financeiro, no final do dia. 
Porque, quando você, enquanto empresa, trabalha na compensação dos seus impactos ambientais e instrumentos contratuais, você está ignorando o impacto local. Então, se eu estou emitindo impactando aqui o Brasil, e estou comprando crédito de carbono em projetos em outra área, o impacto local do meu empreendimento está sendo ignorado. YAMA: E os impactos materiais locais continuam extremamente relevantes. Além do impacto nas infraestruturas locais de energia e de água sobre as quais a gente já falou, há muitas reclamações sobre a poluição do ar gerada pelos geradores, as luzes que nunca desligam e até mesmo a poluição sonora. A Tamara nos contou de um caso curioso de um surto de distúrbios de sono e de enxaqueca que tomou regiões de data centers nos Estados Unidos. TAMARA: [ Uma outra coisa que vale ser lembrada: as pessoas que vivem perto dos data centers tem nos contado que eles são super barulhentos, eles também relatam a poluição visual causada pelas luzes e a poluição sonora. Foi interessante ouvir de comunidades próximas a data centers de mineração de criptomoedas, por exemplo, que os moradores começaram a ter enxaquecas e distúrbios de sono por viverem próximos das instalações. E além de tudo isso, ainda tem a questão da poluição do ar, que é visível a olho nu. Há muitas partículas no ar onde há geradores movidos a diesel para garantir que a energia esteja sempre disponível. ] And the other thing is, you know, for people who live near them, they’re very loud, and so if you talk to people who live near data centers, they will talk about the light pollution, the noise pollution. And it’s been interesting, too, to hear from communities that are near crypto mining facilities, because they will complain of things like migraine headaches and sleep deprivation from living near the facilities. And, you know, the other thing is that the air pollution is quite noticeable. 
So there’s a lot of particulate matter, particularly in the case of using diesel-fueled backup generators as an energy stopgap. DAMNY: E do ponto de vista dos impactos locais, há um fator importantíssimo que não pode ser esquecido: território. Data centers podem ser gigantes, mas ocupam muito mais espaço que meramente seus prédios, porque sua cadeia de suprimentos demanda isso. Como a água e a energia chegarão até os prédios? Mesmo que sejam usadas fontes renováveis de energia, onde serão instaladas as fazendas de energia solar ou as usinas de energia eólica e de tratamento de água? Onde a água contaminada e/ou tratada será descartada? Quem vai fiscalizar? YAMA: E essa demanda sem fim por território esbarra justamente nas questões de racismo ambiental. Porque os territórios que são sacrificados para que os empreendimentos possam funcionar, muito frequentemente, são onde vivem povos originários e populações marginalizadas. Aqui percebemos que a resistência local contra a instalação de data centers é, antes de qualquer coisa, uma questão de justiça ambiental. É o caso de South Memphis nos Estados Unidos, por exemplo. TAMARA: [ Pensando particularmente sobre os tipos de danos causados pelos data centers, não é somente a questão da conta de energia ficar mais cara, ou quantificar a quantidade de energia e água gasta por data centers específicos. A verdadeira questão, na minha opinião, é a relação que existe entre esses danos socioambientais, danos algorítmicos e o racismo ambiental e outras formas de impacto às comunidades que lidam com isso a nível local. Especialmente nos Estados Unidos, com todo esse histórico de supremacia branca e a falta de direitos civis, não é coincidência que locais onde estão comunidades negras, por exemplo, sejam escolhidos como zonas de sacrifício. 
As comunidades negras foram historicamente preferenciais para todo tipo de empreendimento que demanda sacrificar território, como estradas interestaduais, galpões da Amazon… quer dizer, os data centers são apenas a continuação dessa política histórica de racismo ambiental. E tudo isso se soma aos péssimos acordos feitos a nível local, onde um prefeito e outras lideranças governamentais pensam que estão recebendo algo de grande valor econômico. Em South Memphis, por exemplo, o data center é da xAI. Então você para pra refletir como essa plataforma incrivelmente racista ainda tem a audácia de poluir terras de comunidades negras ainda mais ] I think, the way of framing particular kinds of harm, so, you know, it’s not just about, you know, people’s energy bills going up, or, thinking about how we quantify the energy use or the water use of particular data centers, but really thinking about the relationship between a lot of those social harms and algorithmic harms and the environmental racism and other forms of embodied harms that communities are dealing with on that hyper-local level. And, you know, in this country, with its history of white supremacy and just general lack of civil rights, you know, a lot of the places where Black communities have traditionally been, tend to be, you know, the ones sacrificed for various types of development, like, you know, putting up interstates, putting up warehouses for Amazon and data centers are just a continuation of what was already happening. And then you have a lot of crooked deals on the local level, where, you know, maybe a mayor and other local officials think that they’re getting something economically of value. In South Memphis, the data center is connected to xAI. And so thinking about this platform that is so racist and so incredibly harmful to Black communities, you know, anyway, and then has the audacity to actually pollute their land even more. 
DAMNY: Entrando na questão do racismo ambiental a gente se encaminha para o nosso segundo episódio, onde vamos tentar entender como o Brasil se insere na questão dos data centers e como diferentes setores da população estão se organizando para resistir. Antes de encerrar esse episódio, contudo, a gente traz brevemente pra conversa dois personagens que vão ser centrais no próximo episódio. YAMA: Eles nos ajudam a compreender como precisamos considerar a questão dos territórios ao avaliar os impactos. Uma dessas pessoas é a Andrea Camurça, do Instituto Terramar, que está lutando junto ao povo Anacé pelo direito de serem consultados sobre a construção de um data center do TIKTOK em seus territórios. Eu trago agora um trechinho dela falando sobre como mesmo medidas supostamente renováveis se tornam violações territoriais num contexto de racismo ambiental. ANDREA: A gente recebeu notícias agora, recentemente, inclusive ontem, que está previsto um mega empreendimento solar que vai ocupar isso mais para a região do Jaguaribe, que vai ocupar, em média, de equivalente a seiscentos campos de futebol. Então, o que isso representa é a perda de terra. É a perda de água. É a perda do território. É uma diversidade de danos aos povos e comunidades tradicionais que não são reconhecidos, são invisibilizados. Então é vendido como território sem gente, sendo que essas energias chegam dessa forma. Então, assim a gente precisa discutir sobre energias renováveis. A gente precisa discutir sobre soberania energética. A gente precisa discutir sobre soberania digital, sim, mas construída a partir da necessidade do local da soberania dessas populações. DAMNY: A outra pessoa que eu mencionei é uma liderança Indígena, o cacique Roberto Anacé. 
Fazendo uma ótima conexão que nos ajuda a perceber como os impactos globais e locais dos data centers estão conectados, ele observa como parecemos entrar num novo momento do colonialismo, onde a soberania digital e ambiental do Brasil volta a estar em risco, em paralelo à violação de terras indígenas. CACIQUE ROBERTO: Há um risco para a questão da biodiversidade, da própria natureza da retirada da água, do aumento de energia, mas também não somente para o território da Serra, mas para todos que fazem uso dos dados. Ou quem expõe esses dados. Ninguém sabe da mão de quem vai ficar, quem vai controlar quem vai ordenar? E para que querem essa colonização? Eu chamo assim que é a forma que a gente tem essa colonização de dados. Acredito eu que a invasão do Brasil em mil e quinhentos foi de uma forma. Agora nós temos a invasão de nossas vidas, não somente para os indígenas, mas de todos, muitas vezes que fala muito bem, mas não sabe o que vai acontecer depois que esses dados estão guardados. Depois que esses dados vão ser utilizados, para que vão ser utilizados, então esses agravos. Ele é para além do território indígena na Serra. [ tom baixo ] [ Começa Bio Unit ] YAMA: A pesquisa, as entrevistas e a apresentação deste episódio foram feitas pelo Damny Laya e por mim, Yama Chiodi. Eu também fiz o roteiro e a produção. Quem narrou a tradução das falas da Tamara foi Mayra Trinca. O Oxigênio é um podcast produzido pelos alunos do Laboratório de Estudos Avançados em Jornalismo da Unicamp e colaboradores externos. Tem parceria com a Secretaria Executiva de Comunicação da Unicamp e apoio do Serviço de Auxílio ao Estudante, da Unicamp. Além disso, contamos com o apoio da FAPESP, que financia bolsas como a que nos apoia neste projeto de divulgação científica. DAMNY: A lista completa de créditos para os sons e músicas utilizados você encontra na descrição do episódio. Você encontra todos os episódios no site oxigenio.comciencia.br e na sua plataforma preferida. 
No Instagram e no Facebook você nos encontra como Oxigênio Podcast. Segue lá pra não perder nenhum episódio! Aproveite para deixar um comentário. [ Termina Bio Unit ] [ Vinheta Oxigênio ]
Créditos: Aerial foi composta por Bio Unit; Documentary por Coma-Media. Ambas sob licença Creative Commons. Os sons de rolha e os loops de baixo são da biblioteca de loops do Garage Band.
Roteiro, produção: Yama Chiodi
Pesquisa: Yama Chiodi, Damny Laya
Narração: Yama Chiodi, Damny Laya, Mayra Trinca
Entrevistados: Tamara Kneese, Cynthia Picolo, Andrea Camurça e Cacique Roberto Anacé
__________
Descendo a toca do coelho da IA: Data Centers e os Impactos Materiais da “Nuvem” – Uma entrevista com Tamara Kneese: https://www.blogs.unicamp.br/geict/2025/11/06/descendo-a-toca-do-coelho-da-ia-data-centers-e-os-impactos-materiais-da-nuvem-uma-entrevista-com-tamara-kneese/
Não somos quintal de data centers: Um estudo sobre os impactos socioambientais e climáticos dos data centers na América Latina: https://idec.org.br/publicacao/nao-somos-quintal-de-data-centers
Outras referências e fontes consultadas:
Relatórios técnicos e dados oficiais:
IEA (2025), Energy and AI, IEA, Paris https://www.iea.org/reports/energy-and-ai, Licence: CC BY 4.0
“Inteligência Artificial e Data Centers: A Expansão Corporativa em Tensão com a Justiça Socioambiental”. Lapin. https://lapin.org.br/2025/08/11/confira-o-relatorio-inteligencia-artificial-e-data-centers-a-expansao-corporativa-em-tensao-com-a-justica-socioambiental/
Estudo de mercado sobre Power & Cooling de Data Centers. DCD – DATA CENTER DYNAMICS. https://media.datacenterdynamics.com/media/documents/Report_Power__Cooling_2025_PT.pdf
Pílulas – Impactos ambientais da Inteligência Artificial. IPREC. https://ip.rec.br/publicacoes/pilulas-impactos-ambientais-da-inteligencia-artificial/
Policy Brief: IA, data centers e os impactos ambientais. 
IPREC https://ip.rec.br/wp-content/uploads/2025/05/Policy-Paper-IA-e-Data-Centers.pdf MEDIDA PROVISÓRIA Nº 1.318, DE 17 DE SETEMBRO DE 2025 https://www.in.gov.br/en/web/dou/-/medida-provisoria-n-1.318-de-17-de-setembro-de-2025-656851861 Infográfico sobre minerais críticos usados em Data Centers do Serviço de Geologia do Governo dos EUA https://www.usgs.gov/media/images/key-minerals-data-centers-infographic Notícias e reportagens: From Mexico to Ireland, Fury Mounts Over a Global A.I. Frenzy. Paul Mozur, Adam Satariano e Emiliano Rodríguez Mega. The New York Times, 20/10/2025. https://www.nytimes.com/2025/10/20/technology/ai-data-center-backlash-mexico-ireland.html Movimentos pedem ao MP fim de licença de data center no CE. Maristela Crispim, EcoNordeste. 25/08/2025. https://agenciaeconordeste.com.br/sustentabilidade/movimentos-pedem-ao-mp-fim-de-licenca-de-data-center-no-ce/#:~:text=’N%C3%A3o%20somos%20contra%20o%20progresso’&text=Para%20o%20cacique%20Roberto%20Anac%C3%A9,ao%20meio%20ambiente%E2%80%9D%2C%20finaliza. ChatGPT Is Everywhere — Why Aren’t We Talking About Its Environmental Costs? Lex McMenamin. Teen Vogue. https://www.teenvogue.com/story/chatgpt-is-everywhere-environmental-costs-oped Data centers no Nordeste, minérios na África, lucros no Vale do Silício. Le Monde Diplomatique, 11 jun. 2025. Accioly Filho. https://diplomatique.org.br/data-centers-no-nordeste-minerios-na-africa-lucros-no-vale-do-silicio/. The environmental footprint of data centers in the United States. Md Abu Bakar Siddik et al 2021 Environ. Res. Lett. 16064017: https://iopscience.iop.org/article/10.1088/1748-9326/abfba1 Tecnología en el desierto – El debate por los data centers y la crisis hídrica en Uruguay. MUTA, 30 nov. Soledad Acunã https://mutamag.com/cyberpunk/tecnologia-en-el-desierto/. Acesso em: 17 set. 2025. Las zonas oscuras de la evaluación ambiental que autorizó “a ciegas” el megaproyecto de Google en Cerrillos. CIPER Chile, 25 maio 2020. 
https://www.ciperchile.cl/2020/05/25/las-zonas-oscuras-de-la-evaluacion-ambiental-que-autorizo-aciegas-el-megaproyecto-de-google-en-cerrillos/. Acesso em: 17 set. 2025. Thirsty data centres spring up in water-poor Mexican town. Context, 6 set. 2024. https://www.context.news/ai/thirsty-data-centres-spring-up-in-water-poor-mexican-town BNDES lança linha de R$ 2 bilhões para data centers no Brasil. https://agenciadenoticias.bndes.gov.br/industria/BNDES-lanca-linha-de-R$-2-bilhoes-para-data-centersno-Brasil/. Los centros de datos y sus costos ocultos en México, Chile, EE UU, Países Bajos y Sudáfrica. WIRED, 29 maio 2025. Anna Lagos https://es.wired.com/articulos/los-costos-ocultos-del-desarrollo-de-centros-de-datos-en-mexico-chile-ee-uu-paises-bajos-y-sudafrica Big Tech's data centres will take water from world's driest areas. Eleanor Gunn. SourceMaterial, 9 abr. 2025. https://www.source-material.org/amazon-microsoft-google-trump-data-centres-water-use/ Indígenas pedem que MP atue para derrubar licenciamento ambiental de data center do TikTok. Folha de S.Paulo, 26 ago. 2025. https://www1.folha.uol.com.br/mercado/2025/08/indigenas-pedem-que-mp-atue-para-derrubar-licenciamento-ambiental-de-data-center-do-tiktok.shtml The data center boom in the desert. MIT Technology Review https://www.technologyreview.com/2025/05/20/1116287/ai-data-centers-nevada-water-reno-computing-environmental-impact/ Conferências, artigos acadêmicos e jornalísticos: Why are Tech Oligarchs So Obsessed with Energy and What Does That Mean for Democracy? Tamara Kneese. Tech Policy Press. https://www.techpolicy.press/why-are-tech-oligarchs-so-obsessed-with-energy-and-what-does-that-mean-for-democracy/ Data Center Boom Risks Health of Already Vulnerable Communities. Cecilia Marrinan. Tech Policy Press. https://www.techpolicy.press/data-center-boom-risks-health-of-already-vulnerable-communities/ RARE/EARTH: The Geopolitics of Critical Minerals and the AI Supply Chain. 
https://www.youtube.com/watch?v=GxVM3cAxHfg Understanding AI with Data & Society / The Environmental Costs of AI Are Surging – What Now? https://www.youtube.com/watch?v=W4hQFR8Z7k0 IA e data centers: expansão corporativa em tensão com justiça socioambiental. Camila Cristina da Silva, Cynthia Picolo G. de Azevedo. https://www.jota.info/opiniao-e-analise/colunas/ia-regulacao-democracia/ia-e-data-centers-expansao-corporativa-em-tensao-com-justica-socioambiental LI, P.; YANG, J.; ISLAM, M. A.; REN, S. Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models. arXiv, 2304.03271, 26 mar. 2025. Disponível em: https://doi.org/10.48550/arXiv.2304.03271 LIU, Y.; WEI, X.; XIAO, J.; LIU, Z.;XU, Y.; TIAN, Y. Energy consumption and emission mitigation prediction based on data center traffic and PUE for global data centers. Global Energy Interconnection, v. 3, n.3, p. 272-282, 3 jun. 2020. https://doi.org/10.1016/j.gloei.2020.07.008 SIDDIK, M. A. B.; SHEHABI, A.; MARSTON, L. The environmental footprint of data centers in the United States. Environmental Research Letters, v. 16, n. 6, 21 maio 2021. https://doi.org/10.1088/1748-9326/abfba1 Las Mentiras de Microsoft en Chile: Una Empresa No tan Verde. Por Rodrigo Vallejos de Resistencia Socioambiental de Quilicura. Revista De Frente, 18 mar. 2022. https://www.revistadefrente.cl/las-mentiras-de-microsoft-en-chile-una-empresa-no-tan-verde-porrodrigo-vallejos-de-resistencia-socioambiental-de-quilicura/. Acesso em: 17 set. 2025.

The Data Center Frontier Show
Scaling AI: Adaptive Reuse, Power-Rich Sites, and the New GPU Frontier


Play Episode Listen Later Dec 10, 2025 60:38


In this panel session from the 2025 Data Center Frontier Trends Summit (Aug. 26-28) in Reston, Va., JLL's Sean Farney moderates a high-energy panel on how the industry is fast-tracking AI capacity in a world of power constraints, grid delays, and record-low vacancy. Under the banner “Scaling AI: The Role of Adaptive Reuse and Power-Rich Sites in GPU Deployment,” the discussion dives into why U.S. colocation vacancy is hovering near 2%, how power has become the ultimate limiter on AI revenue, and what it really takes to stand up GPU-heavy infrastructure at speed. Schneider Electric's Lovisa Tedestedt, Aligned Data Centers' Phill Lawson-Shanks, and Sapphire Gas Solutions' Scott Johns unpack the real-world strategies they're deploying today—from adaptive reuse of industrial sites and factory-built modular systems, to behind-the-fence natural gas, microgrids, and emerging hydrogen and RNG pathways. Along the way, they explore the coming “AI inference edge,” the rebirth of the enterprise data center, and how AI is already being used to optimize data center design and operations. During this talk, you'll learn:
* Why record-low vacancy and long interconnection queues are reshaping AI deployment strategy.
* How adaptive reuse of legacy industrial and commercial real estate can unlock gigawatt-scale capacity and community benefits.
* The growing role of liquid cooling, modular skids, and grid-to-chip efficiency in getting more power to GPUs.
* How behind-the-meter gas, virtual pipelines, and microgrids are bridging multi-year grid delays.
* Why many experts expect a renaissance of enterprise data centers for AI inference at the edge.
Moderator: Sean Farney, VP, Data Centers, Jones Lang LaSalle (JLL)
Panelists:
Tony Grayson, General Manager, Northstar
Lovisa Tedestedt, Strategic Account Executive – Cloud & Service Providers, Schneider Electric
Phill Lawson-Shanks, Chief Innovation Officer, Aligned Data Centers
Scott Johns, Chief Commercial Officer, Sapphire Gas Solutions
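The "grid-to-chip efficiency" thread above is usually summarized with the PUE metric (total facility power divided by IT power). A minimal sketch with illustrative figures, not numbers from the panel:

```python
# Power Usage Effectiveness (PUE): how much total facility power is drawn
# per watt that actually reaches the IT load. Figures are illustrative.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT power; 1.0 would be a perfect site."""
    return total_facility_kw / it_load_kw

# The same 10 MW IT load under two cooling approaches:
print(pue(17_000, 10_000))  # 1.7  (legacy air cooling, heavy overhead)
print(pue(11_500, 10_000))  # 1.15 (liquid cooling, less overhead)
```

Lower PUE means more of each scarce grid megawatt reaches the GPUs themselves, which is why liquid cooling keeps coming up in power-constrained builds.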

The Ravit Show
What NVIDIA's AI Data Platform Means for Enterprise AI and How Hammerspace Makes It a Reality


Play Episode Listen Later Dec 10, 2025 7:36


AI doesn't fail because of GPUs. It fails because of data. I had a blast chatting with Jeff Echols, Vice President, AI and Strategic Partners at Hammerspace, from NVIDIA GTC in Washington. We talked about the part of AI nobody is fixing fast enough: getting data to GPUs at the speed the GPUs need it. Jeff breaks down what makes the Hammerspace AI Data Platform different from traditional AI storage. This isn't “more storage.” It's orchestration. Move data globally. Feed it to the right workload. Keep GPUs busy instead of waiting. We also got into MCP and why an intelligent data control layer is now core to any real AI strategy, plus how Hammerspace lines up with the NVIDIA AI Data Platform reference design so enterprises can actually run this in production, not just in a lab. And we talked Tier 0. If you want GPU ROI, Tier 0 is about one thing: keep the GPUs fed at full speed. If you're trying to scale AI past a pilot, watch this. #data #ai #nvidiagtc #nvidia #hammerspace #gpu #theravitshow
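The "keep GPUs fed" point above reduces to simple utilization arithmetic; a toy sketch with hypothetical numbers:

```python
# Toy model: the fraction of wall-clock time a GPU actually computes when
# each step stalls waiting on data. All numbers are hypothetical.

def gpu_utilization(compute_s: float, stall_s: float) -> float:
    """Share of time spent computing rather than waiting on I/O."""
    return compute_s / (compute_s + stall_s)

# A training step that computes for 80 ms but waits 20 ms on storage:
print(gpu_utilization(0.080, 0.020))  # 0.8, i.e. 20% of GPU-hours sit idle
```

That idle fraction is the ROI gap that data orchestration layers like the one described above are trying to close.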

Invest Like the Best with Patrick O'Shaughnessy
Gavin Baker - Nvidia v. Google, Scaling Laws, and the Economics of AI - [Invest Like the Best, EP.451]


Play Episode Listen Later Dec 9, 2025 88:19


My guest this week is Gavin Baker. Gavin is the managing partner and CIO of Atreides Management, and he has been on the show many times before.  I will never forget when I first met Gavin in 2017. I find his interest in markets, his curiosity about the world to be as infectious as any investor that I've ever come across. He's encyclopedic on what is going on in the world of technology today, and I've had the good fortune to host him every year or two on this podcast. Gavin began covering Nvidia as an investor more than two decades ago, giving him a rare perspective on how the company – and the entire semiconductor ecosystem – has evolved. A lot has changed since our last conversation a year ago, making this the perfect time to revisit the topic. In this conversation, we talk about everything that interests Gavin – Nvidia's GPUs, Google's TPUs, the changing AI landscape, the math and business models around AI companies and everything in between. We also discussed the idea of data centers in space, which he communicates with his usual passion and logic. In closing, at the end of this conversation, because I've asked him my traditional closing question before, I asked him a different question, which led to a discussion of his entire investing origin story that I had never heard before. Because Gavin is one of the most passionate thinkers and investors that I know, these conversations are always amongst my most favorite. I hope you enjoy this latest in the series of discussions with Gavin Baker. For the full show notes, transcript, and links to mentioned content, check out the episode page ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠here⁠⁠⁠⁠⁠⁠⁠⁠.⁠⁠⁠⁠⁠⁠⁠⁠ ----- This episode is brought to you by⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ Ramp⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠. Ramp's mission is to help companies manage their spend in a way that reduces expenses and frees up time for teams to work on more valuable projects. Go to⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ ramp.com/invest to sign up for free and get a $250 welcome bonus. 
----- This episode is brought to you by Ridgeline. Ridgeline has built a complete, real-time, modern operating system for investment managers. It handles trading, portfolio management, compliance, customer reporting, and much more through an all-in-one real-time cloud platform. Head to ridgelineapps.com to learn more about the platform. ----- This episode is brought to you by AlphaSense. AlphaSense has completely transformed the research process with cutting-edge AI technology and a vast collection of top-tier, reliable business content. Invest Like the Best listeners can get a free trial now at Alpha-Sense.com/Invest and experience firsthand how AlphaSense and Tegus help you make smarter decisions faster. ----- Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com). Show Notes: (00:00:00) Welcome to Invest Like The Best (00:04:00) Meet Gavin Baker (00:06:00) Understanding Gemini 3 (00:09:05) Scaling Laws for Pre-Training (00:12:12) Google v. Nvidia (00:16:52) Google as Lowest Cost Producer of Tokens (00:28:05) AI Can Automate Anything that can be Verified (00:34:30) The AI Bear Case: Edge AI (00:37:18) Going from Intelligence to Usefulness (00:43:44) AI Adoption in Fortune 500 Companies (00:48:58) Frontier Models and Industry Dynamics (00:56:40) China's Mistake and Blackwell's Geopolitical Leverage (00:57:50) OpenAI's Code Red (01:00:46) Data Centers in Space (01:07:13) Cycles in AI (01:11:10) Power as a Bottleneck (01:14:17) AI Native Entrepreneurs (01:16:21) Semiconductor VC (01:20:41) The Mistake the SaaS Industry is Making (01:26:50) Series of Bubbles (01:28:56) Whatever AI Needs, It Gets (01:29:57) Investing is the Search for Truth (01:31:24) Gavin's Investing Origin Story

Keepin' The Lights On
Revolutionizing Data Center Cooling with Chatsworth Sponsor Highlight 02

Keepin' The Lights On

Play Episode Listen Later Dec 9, 2025 5:22


In this conversation, Rob Jones, Area Vice President of Sales at Chatsworth Products (CPI), discusses the critical need for advanced cooling solutions in data centers driven by the convergence of AI and high-performance computing (HPC). He highlights CPI's innovative liquid cooling technologies, which are essential for managing the increasing heat output from modern GPUs. The discussion also covers the types of organizations adopting CPI's solutions and how CPI is future-proofing infrastructure to accommodate evolving technology demands. Thank you to Chatsworth Products/CPI for their sponsorship of the podcast. Their expertise and support help us keep the lights on here at the podcast. Please check out their selection of products on Graybar's website or reach out to your local Graybar representative to learn how CPI can help keep your data center efficient and cool.
https://www.graybar.com/manufacturers/chatsworth/c/sup-chatsworth-group?utm_source=podcast&utm_medium=show-notes&utm_campaign=Sponsor-Highlight-2025-Chatsworth
YouTube link: https://youtu.be/GDTl_Mux9SQ

Everyday AI Podcast – An AI and ChatGPT Podcast
Aligning AI With Climate And Business Goals

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Dec 5, 2025 27:56


How can you scale AI at the enterprise, yet still hit your climate goals? And can heavy AI usage and an enterprise's ESG mission co-exist? Ashutosh Ahuja lays it out for us. Aligning AI With Climate And Business Goals -- An Everyday AI Chat with Jordan Wilson and Ashutosh Ahuja
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
AI's Environmental Impact and Climate Concerns
Companies Aligning AI with ESG Goals
AI Adoption Versus Carbon Footprint Tradeoffs
Metrics for Measuring AI's Environmental Impact
Business Efficiency Gains from AI Adoption
Real-World Examples: AI Offsetting Carbon Footprint
Industry Opportunities for Sustainable AI Integration
Future Trends: Efficient AI Models and Edge Computing
Timestamps:
00:00 Everyday AI Podcast & Newsletter
05:52 Balancing Progress and Legacy
07:03 "Should Companies Limit AI Usage?"
12:02 "Sentiment Analysis for Business Growth"
17:07 "Energy Efficiency Impacts ESG Metrics"
19:40 Robots, Energy, and AI Opportunity
21:41 AI Efficiency and Climate Balance
25:04 "Trust Instincts in Investments"
Keywords: AI and climate, climate goals, aligning AI with ESG, environmental impact of AI, carbon footprint, energy use in AI data centers, water cooling for GPUs, sustainable business practices, enterprise AI strategy, ESG compliance, climate pledges, AI adoption in business, carbon footprint metrics, machine learning for sustainability, predictive analytics, ethical AI, green AI solutions, renewable energy sector, AI in waste management, camera vision for waste sorting, delivery robots, edge AI, small business AI implementation, AI efficiency, sentiment analysis, customer patterns, predictive maintenance, IoT data, auto scaling, cloud computing, resource optimization, SEC filings, brand sentiment tracking, LLM energy consumption, environmental considerations for AI, future of AI in climate action, business efficiency, human in the loop, philanthropic business practices, sustainable architecture, large language models and climate, tech industry climate initiatives, AI-powered resource savings, operational sustainability.
Ready for ROI on GenAI? Go to youreverydayai.com/partner

This Week in Startups
Getting past the “Cardinal Sins of Delegating” with Jonathan Swanson of Athena | E2218

This Week in Startups

Play Episode Listen Later Dec 4, 2025 57:11


This Week In Startups is made possible by:
Northwest Registered Agent - https://www.northwestregisteredagent.com/twist
Crusoe - http://crusoe.ai/build
Gusto - https://www.gusto.com/twist
Today's show: Delegating is its own unique skill, requiring training and a real investment of time and attention. On TWiST, Jason chats for a full hour with the founder of one of his favorite startups, Athena, which trains online assistants and pairs them with busy founders and executives. (Jason has 2!) But getting the MOST out of your executive assistants is less obvious than it looks. Jonathan unpacks some of the secrets to "Black Diamond Delegating," and how he manages to keep 6 different high-level helpers operating at once. Plus, Jason and Jonathan look back at the Open Angel Forum days, where Jason invested in Jonathan's previous company, Thumbtack, praise the "Checklist Manifesto," discuss the telltale signs you've achieved market pull, and lots more insights.
Timestamps:
(01:53) We're joined by Jonathan Swanson from one of JCal's fav startups, Athena!
(02:02) Jason and Jonathan first met during the Open Angel Forum, when Jonathan was working on Thumbtack
(06:44) Finding the "little touches" that can help make an app more delightful
(9:47) Northwest Registered Agent - Form your entire business identity in just 10 clicks and 10 minutes. Get more privacy, more options, and more done—visit https://www.northwestregisteredagent.com/twist today!
(12:05) The shift from Thumbtack to Athena was all about time
(12:52) How Jonathan delegates to 6 exec assistants at once
(14:22) Pricing Athena's EAs: Jason runs the numbers
(15:09) Why Athena made Jason believe in hiring assistants again
(18:15) Getting past the "Cardinal Sins of Delegation"
(19:38) Crusoe Cloud: Crusoe is the AI factory company. Reliable infrastructure and expert support. Visit https://crusoe.ai/build to reserve your capacity for the latest GPUs today.
(20:48) Will AI ever be able to replace Athena assistants?
(23:41) Inside how Athena finds and trains assistants from around the world
(27:01) How JCal became an Athena Ambassador… and almost crashed the system!
(30:55) Gusto - Check out the online payroll and benefits experts with software built specifically for small business and startups. Try Gusto today and get three months FREE at https://www.gusto.com/twist
(32:11) The magic of having assistants work on "backstop projects" and creative tasks
(37:14) How to know when you have achieved market pull
(40:05) Why getting the most out of delegating takes real investment and training
(44:36) More praise for the Checklist Manifesto
(46:26) Jonathan gives us a peek at what "Black Diamond Delegation" looks like
(52:14) Jason's early experiences hiring overseas assistants, from the Mahalo days
*Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com
Check out the TWIST500: https://twist500.com
Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp
*Follow Lon: X: https://x.com/lons
*Follow Alex: X: https://x.com/alex LinkedIn: https://www.linkedin.com/in/alexwilhelm/
*Follow Jason: X: https://twitter.com/Jason LinkedIn: https://www.linkedin.com/in/jasoncalacanis/
*Thank you to our partners:
(9:47) Northwest Registered Agent - Form your entire business identity in just 10 clicks and 10 minutes. Get more privacy, more options, and more done—visit https://www.northwestregisteredagent.com/twist today!
(19:38) Crusoe Cloud: Crusoe is the AI factory company. Reliable infrastructure and expert support. Visit https://crusoe.ai/build to reserve your capacity for the latest GPUs today.
(30:55) Gusto - Check out the online payroll and benefits experts with software built specifically for small business and startups. Try Gusto today and get three months FREE at https://www.gusto.com/twist

Run The Numbers
$1B AI Finance: How CoreWeave's CFO Built a Rocket Ship | Nitin Agrawal

Run The Numbers

Play Episode Listen Later Dec 4, 2025 63:39


CoreWeave CFO Nitin Agrawal joins Run the Numbers to unpack the finance engine behind one of the fastest-growing AI infrastructure companies on the planet. CJ and Nitin dive into what it takes to build financial discipline in an environment where business models are being invented in real time, discussing the company's 700% growth last year and massive first-quarter performance as a newly public company. They cover capex strategy, securitizing GPUs, managing billion-dollar revenue backlogs, and structuring incentives for hyperscale deals, all while keeping investors grounded and servers running at full tilt. If you want a front-row seat to finance in the AI arms race, this episode delivers.
SPONSORS:
Tipalti automates the entire payables process—from onboarding suppliers to executing global payouts—helping finance teams save time, eliminate costly errors, and scale confidently across 200+ countries and 120 currencies. More than 5,000 businesses already trust Tipalti to manage payments with built-in security and tax compliance. Visit https://www.tipalti.com/runthenumbers to learn more.
Aleph automates 90% of manual, error-prone busywork, so you can focus on the strategic work you were hired to do. Minimize busywork and maximize impact with the power of a web app, the flexibility of spreadsheets, and the magic of AI. Get a personalised demo at https://www.getaleph.com/run
Fidelity Private Shares is the all-in-one equity management platform that keeps your cap table clean, your data room organized, and your equity story clear—so you never risk losing a fundraising round over messy records. Schedule a demo at https://www.fidelityprivateshares.com and mention Mostly Metrics to get 20% off.
Sage Intacct is the cloud financial management platform that replaces spreadsheets, eliminates manual work, and keeps your books audit-ready—so you can scale without slowing down. It combines accounting, ERP, and real-time reporting for retail, financial services, logistics, tech, professional services, and more. Sage Intacct delivers fast ROI, with payback in under six months and up to 250% return. Rated #1 in customer satisfaction for eight straight years. Visit Sage Intacct and take control of your growth: https://bit.ly/3Kn4YHt
Mercury is business banking built for builders, giving founders and finance pros a financial stack that actually works together. From sending wires to tracking balances and approving payments, Mercury makes it simple to scale without friction. Join the 200,000+ entrepreneurs who trust Mercury and apply online in minutes at https://www.mercury.com
RightRev automates the revenue recognition process from end to end, gives you real-time insights, and ensures ASC 606 / IFRS 15 compliance—all while closing books faster. For RevRec that auditors actually trust, visit https://www.rightrev.com and schedule a demo.
LINKS:
Nitin on LinkedIn: https://www.linkedin.com/in/nitin-agrawal-cloudcfo/
Company: https://www.coreweave.com/
CJ on LinkedIn: https://www.linkedin.com/in/cj-gustafson-13140948/
Mostly metrics: https://www.mostlymetrics.com
RELATED EPISODES:
The Art and Science of a Day-One IPO Pop with OneStream Software CFO Bill Koefoed: https://youtu.be/kYCn7XNkCBc
From Facebook's Hypergrowth to Daffy's Disruption: A CFO's Playbook for Saying Yes: https://youtu.be/bRIZ6oNPGD0
TIMESTAMPS:
00:00:00 Preview and Intro
00:02:54 Sponsors – Tipalti | Aleph | Fidelity Private Shares
00:06:12 Interview Begins: Scaling CoreWeave
00:06:52 CoreWeave's Pivot From Crypto to AI
00:11:41 Why CoreWeave Is Uniquely Positioned to Lead AI Infrastructure
00:13:32 Hiring for Both Scrappiness and Scale
00:16:01 Post-IPO Whirlwind: Acquisitions, Debt Raises, and 10-Year Deals
00:16:43 Sponsors – Sage Intacct | Mercury | RightRev
00:20:13 Managing Investor Expectations With Radical Transparency
00:22:39 Doubling Active Power in Six Months
00:25:19 Risk-Balanced Capital Deployment: Power First, GPUs Second
00:27:12 Financing GPUs With Delayed-Draw Facilities
00:29:38 CoreWeave Rated Platinum for GPU Cluster Performance
00:32:25 Compute as the Bottleneck for AI Growth
00:33:47 Explaining Revenue Backlog Shape & Timing
00:35:06 The Strength of Reserved Instance Contracts
00:36:07 Giving Tight but Honest Guidance
00:40:26 How Mega-Deals Require C-Suite Participation
00:42:19 Tackling Revenue Concentration Through Diversification
00:44:05 Building an AI-Only Cloud, Not a General-Purpose Cloud
00:46:27 Capital Markets Muscle: Raising Billions at Speed
00:47:47 Accounting Complexity in a Business With No Precedent
00:49:33 Even the CFO Must Unlearn Old Cloud Assumptions
00:51:29 Scaling Public-Company Processes in 90-Day Cycles
00:54:42 The Couch Fire vs. House Fire Framework
00:57:17 Balancing Risk Mitigation With Opportunity Seeking
01:00:30 No Downtime for ERP Changes During Hypergrowth
01:02:33 Why the Team Stays Energized Despite the Chaos
#RunTheNumbersPodcast #CFOInsights #Hypergrowth #AIInfrastructure #FinanceStrategy
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit cjgustafson.substack.com

Cables2Clouds
Bots, Bursts, And Bare Metal: Because The Internet Wanted Drama

Cables2Clouds

Play Episode Listen Later Dec 3, 2025 32:56 Transcription Available


We break down Cloudflare's outage, why a small config change caused big waves, and what better guardrails could look like. We then unpack AWS and Google's cross-cloud link, Megaport's move into bare metal and GPUs, Webex adding deepfake defenses, and a new startup aiming to tune AI networks at microsecond speed.
• Cloudflare outage root cause and fallout
• Automation guardrails, validation and rollbacks
• AWS–Google cross-cloud connectivity preview
• Pricing, routing and policy gaps to watch
• Megaport acquires Latitude SH for compute
• Bare metal and GPU as a service near clouds
• Webex integrates deepfake and fraud detection
• Accuracy risks, UX and escalation paths
• Apstra founders launch Aria for AI networks
• Microburst telemetry, closed-loop control and SLAs
If you enjoyed this, please give us some feedback or share it with a friend; we would love to hear from you, and we will see you in two weeks with another episode.
Purchase Chris and Tim's book on AWS Cloud Networking: https://www.amazon.com/Certified-Advanced-Networking-Certification-certification/dp/1835080839/
Check out the Monthly Cloud Networking News: https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/
Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on BlueSky: https://bsky.app/profile/cables2clouds.com
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Scaling Agentic Inference Across Heterogeneous Compute with Zain Asgar - #757

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later Dec 2, 2025 48:44


In this episode, Zain Asgar, co-founder and CEO of Gimlet Labs, joins us to discuss heterogeneous AI inference across diverse hardware. Zain argues that the current industry standard of running all AI workloads on high-end GPUs is unsustainable for agents, which consume significantly more tokens than traditional LLM applications. We explore Gimlet's approach to heterogeneous inference, which involves disaggregating workloads across a mix of hardware—from H100s to older GPUs and CPUs—to optimize unit economics without sacrificing performance. We dive into their "three-layer cake" architecture: workload disaggregation, a compilation layer that maps models to specific hardware targets, and a novel system that uses LLMs to autonomously rewrite and optimize compute kernels. Finally, we discuss the complexities of networking in heterogeneous environments, the trade-offs between numerical precision and application accuracy, and the future of hardware-aware scheduling. The complete show notes for this episode can be found at https://twimlai.com/go/757.
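The workload-disaggregation idea described in the episode can be caricatured as a cost-aware placement policy: route each workload to the cheapest hardware tier that still meets its latency budget. A minimal sketch, assuming invented tier names, prices, and latency multipliers; this is not Gimlet's actual scheduler:

```python
# Toy cost-aware placement: pick the cheapest hardware tier that meets each
# workload's latency budget. All numbers here are invented for illustration.
HARDWARE = [
    # (name, $/hour, relative latency multiplier vs. a top-end GPU)
    ("cpu",     0.5, 8.0),
    ("old-gpu", 1.2, 3.0),
    ("h100",    4.0, 1.0),
]

def place(workloads):
    """workloads: list of (name, base_latency_ms_on_h100, latency_budget_ms)."""
    plan = {}
    for name, base_ms, budget_ms in workloads:
        # Scan tiers from cheapest to priciest; take the first one that fits.
        for hw, cost, mult in sorted(HARDWARE, key=lambda h: h[1]):
            if base_ms * mult <= budget_ms:
                plan[name] = hw
                break
        else:
            plan[name] = "h100"  # nothing fits: fall back to the fastest tier
    return plan

plan = place([
    ("interactive-chat", 200, 400),   # tight budget: needs fast hardware
    ("agent-subtask",    200, 1000),  # looser budget: mid-tier is enough
    ("batch-embedding",  200, 5000),  # very loose budget: CPU is fine
])
```

A real system would also have to account for queueing, memory capacity, network placement, and per-tier numerical precision, which is where the engineering discussed in the episode lives.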

UK Investor Magazine
Critical metals and AI infrastructure with Majestic Corporation

UK Investor Magazine

Play Episode Listen Later Dec 2, 2025 14:05


The UK Investor Magazine was delighted to welcome Krystal Lai, Head of Communications at Majestic Corporation, back to the podcast for another insightful discussion around critical minerals and urban mining. This time, we focus on critical metals and AI infrastructure. Download the 'Critical Minerals in AI Infrastructure' report. This episode explores AI infrastructure's material demands and the data centre waste recycling opportunity. We look beyond AI's power consumption to the physical metals driving the technology and their end-of-life value. The conversation examines specific metals in GPUs and AI hardware, assessing how infrastructure buildout affects global metal demand and how recycling is a key element in securing future supply. Krystal details which AI infrastructure metals urban miner Majestic recovers, their current exposure to data centre waste, and the projected growth in this segment. We finish by looking at how Majestic is positioning itself to capture the opportunity. Hosted on Acast. See acast.com/privacy for more information.

AI and the Future of Work
364: Inside the AI Infrastructure Race: TensorWave CEO Darrick Horton on Power, GPUs and AMD vs NVIDIA.

AI and the Future of Work

Play Episode Listen Later Dec 1, 2025 36:14


Darrick Horton is the CEO and co-founder of TensorWave, the company making waves in AI infrastructure by building high-performance compute on AMD chips. In 2023, he and his team took the unconventional path of bypassing Nvidia, a bold bet that has since paid off with nearly $150 million raised from Magnetar, AMD Ventures, Prosperity7, and others. TensorWave is now operating a dedicated training cluster of around 8,000 AMD Instinct MI325X GPUs and has already hit a $100 million revenue run rate. Darrick is a serial entrepreneur with a track record of building infrastructure companies. Before TensorWave, he co-founded VMAccel, sold Lets Rolo to LifeKey, and co-founded the crypto mining company VaultMiner. He began his career as a mechanical engineer and plasma physicist at Lockheed Martin's Skunk Works, where he worked on nuclear fusion energy. While he studied physics and mechanical engineering at Andrews University, he left early to pursue entrepreneurship and hasn't looked back since.
In this conversation we discussed:
Why Darrick chose AMD over Nvidia to build TensorWave's AI infrastructure, and how that decision created a competitive advantage in a GPU-constrained market
What makes training clusters more versatile than inference clusters, and why TensorWave focused on the former to meet broader customer needs
How Neocloud providers like TensorWave can move faster and innovate more effectively than legacy hyperscalers in deploying next-generation AI infrastructure
Why power, not GPUs, is becoming the biggest constraint in scaling AI workloads, and how data center architecture must evolve to address it
Why Darrick predicts AI architectures will continue to evolve beyond transformers, creating constant shifts in compute demand
How massive increases in model complexity are accelerating the need for green energy, tighter feedback loops, and seamless integration of compute into AI workflows
Resources:
Subscribe to the AI & The Future of Work Newsletter
Connect with Darrick on LinkedIn
AI fun fact article
On How the new definition of work

CiscoChat Podcast
Cisco Tech Stories - ep 29 - Rack and Roll - Scaling AI DCs

CiscoChat Podcast

Play Episode Listen Later Dec 1, 2025 46:55


In this episode, Venkat Kirishnamurthy, a Principal Architect at Cisco, explains the ins and outs of designing an AI data center. How is it different from any other data center? What kind of scale are we talking about? What throughput is required to connect 1,000 GPUs together? Learn more about CX Services and how we can help you design your data center: https://www.cisco.com/site/us/en/services/index.html

The {Closed} Session
How to build a global AI infrastructure company

The {Closed} Session

Play Episode Listen Later Dec 1, 2025 34:12


Most enterprises burn millions on idle GPUs while developers wait weeks for access. Haseeb Budhani, CEO of Rafay Systems, built a global GPU orchestration platform after exits at Soha Systems (acquired by Akamai) and brings deep infrastructure expertise to solving the $100B GPU waste crisis. He reveals why 93% of Fortune 500 companies run at sub-85% GPU utilization, how sovereign AI requirements are driving hundreds of "Neo clouds" globally, and the specific multi-tenancy frameworks that transform expensive compute from sunk cost into competitive advantage.
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
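The utilization figures quoted above translate directly into dollars: every idle GPU-hour on a reserved cluster is paid for regardless. A back-of-the-envelope sketch, assuming a hypothetical fleet size and hourly rate (not Rafay's numbers):

```python
def idle_spend(gpus: int, hourly_rate: float, utilization: float,
               hours: float = 8760) -> float:
    """Annual dollars paid for GPU-hours that sat idle (8,760 hours/year)."""
    return gpus * hourly_rate * hours * (1.0 - utilization)

# 1,000 reserved high-end GPUs at a hypothetical $2.50/hr
waste_at_60 = idle_spend(1000, 2.50, 0.60)  # 60% utilization
waste_at_85 = idle_spend(1000, 2.50, 0.85)  # 85% utilization
recovered = waste_at_60 - waste_at_85       # value of better orchestration
```

Under these assumed rates, lifting utilization from 60% to 85% recovers several million dollars a year on a single thousand-GPU cluster, which is the economic case orchestration platforms make.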

Spartan Geek
El Flush

Spartan Geek

Play Episode Listen Later Nov 30, 2025 12:38


This week's Flush with the best of the week's news. Leave me your comments.
Official social media: ► https://linktr.ee/DrakSpartanOficial
For anything, contact Diego Walker: diegowalkercontacto@gmail.com
Video date: [30-11-2025]
#flush #amd #ram #nvidia #conectores #gpu #fire #conectorgpu #drakspartan #drak #elflush

Big Technology Podcast
NVIDIA Panic Mode?, OpenAI's Funding Hole, Ilya's Mystery Revenue Plan

Big Technology Podcast

Play Episode Listen Later Nov 28, 2025 61:05


Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover: 1) Black Friday secrets 2) Google may sell its TPUs to Meta and financial institutions 3) Nvidia sends an antsy tweet 4) How does Google's TPU stack up next to NVIDIA's GPUs 5) Could Google package the TPU with cloud services? 6) NVIDIA responds to the criticism 7) HSBC on how much OpenAI needs to earn to cover its investments 8) Thinking about OpenAI's advertising business 9) ChatGPT users lose touch with reality 10) Ilya Sutskever's mysterious product and revenue plans 11) X reveals our locations --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com Learn more about your ad choices. Visit megaphone.fm/adchoices

The Wall Street Skinny
194. Michael Burry Accuses Meta + Oracle of AI Accounting Fraud...Legit? Depreciation & Valuation Masterclass

The Wall Street Skinny

Play Episode Listen Later Nov 28, 2025 35:55


In this episode of The Skinny on Wall Street, Kristen and Jen unpack the story stirring up markets: Michael Burry's latest warning that Big Tech is overstating earnings by extending the "useful life" assumptions on their GPUs. The conversation becomes a real-time teach-in on depreciation, useful life estimates, GAAP vs. tax depreciation, and how a small shift in an accounting estimate can meaningfully inflate EPS—especially for mega-cap tech stocks that trade heavily on P/E multiples. Kristen walks through exactly how depreciation affects valuation, and why some metrics (like EBITDA) and methodologies (like the DCF) are untouched by the choice of useful life. The big question the duo wrestle with: is Burry identifying a real risk, or is this a nothingburger amplified by market paranoia? From there, Jen shifts to the fixed income landscape ahead of the December Fed meeting—one the central bank must navigate without key data (payrolls and CPI) that won't arrive until after the rate decision. She breaks down how Powell is managing optionality near the end of his term, how the market is pricing a December cut, and what a likely dovish successor (Kevin Hassett) could mean for rates in 2026. They also dig into credit markets: years of high coupons have fueled relentless reinvestment demand, but an uptick in issuance—especially from AI-heavy hyperscalers—may finally rebalance supply and demand. The duo look abroad as well, analyzing the UK's newly announced national property tax and what it signals about global fiscal stress. The episode wraps with big updates from The Wall Street Skinny: the long-awaited launch of their Financial Modeling Course, the continued fixed income course presale, and new January 2026 office hours, plus the return date for HBO's Industry (January 11!).
To get 25% off all our self-paced courses, use code BLACKFRIDAY25 at checkout!
Learn more about 9fin HERE
Shop our Self Paced Courses:
Investment Banking & Private Equity Fundamentals HERE
Fixed Income Sales & Trading HERE
Wealthfront.com/wss. This is a paid endorsement for Wealthfront. May not reflect others' experiences. Similar outcomes not guaranteed. Wealthfront Brokerage is not a bank. Rate subject to change. Promo terms apply. If eligible for the boosted rate of 4.15% offered in connection with this promo, the boosted rate is also subject to change if base rate decreases during the 3 month promo period. The Cash Account, which is not a deposit account, is offered by Wealthfront Brokerage LLC ("Wealthfront Brokerage"), Member FINRA/SIPC. Wealthfront Brokerage is not a bank. The Annual Percentage Yield ("APY") on cash deposits as of 11/7/25, is representative, requires no minimum, and may change at any time. The APY reflects the weighted average of deposit balances at participating Program Banks, which are not allocated equally. Wealthfront Brokerage sweeps cash balances to Program Banks, where they earn the variable APY. Sources HERE.
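The accounting mechanism Burry flags is easy to quantify: under straight-line depreciation, stretching the assumed useful life shrinks the annual expense and lifts EPS with no change in cash flows. A minimal sketch with made-up numbers (none taken from Meta's or Oracle's actual filings):

```python
# Illustrative only: all dollar figures and share counts are hypothetical.
def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation: spread hardware cost evenly over its life."""
    return cost / useful_life_years

def eps(pre_depreciation_income: float, depreciation: float, shares: float) -> float:
    """Earnings per share after subtracting the depreciation expense."""
    return (pre_depreciation_income - depreciation) / shares

gpu_fleet_cost = 40e9     # $40B of GPUs (hypothetical)
income_before_dep = 60e9  # operating income before depreciation (hypothetical)
shares_outstanding = 2.5e9

eps_4yr = eps(income_before_dep, annual_depreciation(gpu_fleet_cost, 4), shares_outstanding)
eps_6yr = eps(income_before_dep, annual_depreciation(gpu_fleet_cost, 6), shares_outstanding)
```

With these assumed inputs, moving from a 4-year to a 6-year life adds roughly $1.33 to EPS, about a 6.7% boost, which is why the estimate matters so much for P/E-driven valuations while EBITDA and DCF-based approaches are unaffected.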

Hacker News Recap
November 27th, 2025 | Migrating the main Zig repository from GitHub to Codeberg

Hacker News Recap

Play Episode Listen Later Nov 28, 2025 14:25


This is a recap of the top 10 posts on Hacker News on November 27, 2025. This podcast was generated by wondercraft.ai
(00:30): Migrating the main Zig repository from GitHub to Codeberg
Original post: https://news.ycombinator.com/item?id=46064571&utm_source=wondercraft_ai
(01:52): Penpot: The Open-Source Figma
Original post: https://news.ycombinator.com/item?id=46064757&utm_source=wondercraft_ai
(03:14): Tell HN: Happy Thanksgiving
Original post: https://news.ycombinator.com/item?id=46065955&utm_source=wondercraft_ai
(04:36): Linux Kernel Explorer
Original post: https://news.ycombinator.com/item?id=46066280&utm_source=wondercraft_ai
(05:58): DIY NAS: 2026 Edition
Original post: https://news.ycombinator.com/item?id=46065034&utm_source=wondercraft_ai
(07:20): AI CEO – Replace your boss before they replace you
Original post: https://news.ycombinator.com/item?id=46072002&utm_source=wondercraft_ai
(08:42): Same-day upstream Linux support for Snapdragon 8 Elite Gen 5
Original post: https://news.ycombinator.com/item?id=46070668&utm_source=wondercraft_ai
(10:04): We're losing our voice to LLMs
Original post: https://news.ycombinator.com/item?id=46069771&utm_source=wondercraft_ai
(11:26): TPUs vs. GPUs and why Google is positioned to win AI race in the long term
Original post: https://news.ycombinator.com/item?id=46069048&utm_source=wondercraft_ai
(12:48): The Nerd Reich – Silicon Valley Fascism and the War on Democracy
Original post: https://news.ycombinator.com/item?id=46066482&utm_source=wondercraft_ai
This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai

The New Stack Podcast
From Cloud Native to AI Native: Where Are We Going?

The New Stack Podcast

Play Episode Listen Later Nov 28, 2025 44:20


At KubeCon + CloudNativeCon 2025 in Atlanta, the panel of experts - Kate Goldenring of Fermyon Technologies, Idit Levine of Solo.io, Shaun O'Meara of Mirantis, Sean O'Dell of Dynatrace and James Harmison of Red Hat - explored whether the cloud native era has evolved into an AI native era — and what that shift means for infrastructure, security and development practices. Jonathan Bryce of the CNCF argued that true AI-native systems depend on robust inference layers, which have been overshadowed by the hype around chatbots and agents. As organizations push AI to the edge and demand faster, more personalized experiences, Fermyon's Kate Goldenring highlighted WebAssembly as a way to bundle and securely deploy models directly to GPU-equipped hardware, reducing latency while adding sandboxed security. Dynatrace's Sean O'Dell noted that AI dramatically increases observability needs: integrating LLM-based intelligence adds value but also expands the challenge of filtering massive data streams to understand user behavior. Meanwhile, Mirantis CTO Shaun O'Meara emphasized a return to deeper infrastructure awareness. Unlike abstracted cloud native workloads, AI workloads running on GPUs require careful attention to hardware performance, orchestration, and energy constraints. Managing power-hungry data centers efficiently, he argued, will be a defining challenge of the AI native era.
Learn more from The New Stack about evolving the cloud native ecosystem to an AI native era:
Cloud Native and AI: Why Open Source Needs Standards Like MCP
A Decade of Cloud Native: From CNCF, to the Pandemic, to AI
Crossing the AI Chasm: Lessons From the Early Days of Cloud
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

WAGMI Ventures Podcast
Building the First Economic Layer for AI Compute, with Kony Kwong (GAIB AI)

WAGMI Ventures Podcast

Play Episode Listen Later Nov 27, 2025 35:00


Kony Kwong is the CEO and Co-Founder of GAIB AI, a pioneering platform transforming physical GPUs into a new yield-bearing asset class. As AI drives exponential demand for computational infrastructure, GAIB stands at the forefront of a new financial frontier — where AI and DeFi converge to provide additional funding channels for cloud and data centers while offering investors direct access to the explosive AI economy. With a background spanning quantitative trading, machine learning engineering at top-tier firms, and early bets on the AI + blockchain convergence, Kony brings a sharp operator-investor lens to one of the fastest-moving sectors in tech. In this episode, Kony unpacks WAGMI Ventures' thesis on why AI agents are the killer app for crypto in 2025 and beyond, how GAIB is transforming physical GPUs into a new yield-bearing asset class, designing autonomous onchain agents that actually own assets, trade, and compound value, and the unique model of simultaneously building and investing behind its own convictions. He dives deep into the technical and economic breakthroughs needed for truly agentic crypto systems, the massive alpha in AI economies, and how GAIB is positioning itself as the first economic layer for AI compute, bringing new investment possibilities into this surging sector.

3D InCites Podcast
How Optical Inspection Protects Advanced PCBs

3D InCites Podcast

Play Episode Listen Later Nov 27, 2025 14:44 Transcription Available


A crowded server board with ten thousand parts doesn't forgive sloppy inspection—and neither do pricey GPUs and chiplets. From the floor of Productronica in Munich, we dig into how automated optical inspection keeps advanced packages honest once they hit the PCB line, where solder quality, coplanarity, and sheer component variety can make or break yield. Vidya Vijay from Nordson Test & Inspection joins us to unpack why AOI remains the fastest path to actionable insight, when X‑ray is the smarter choice, and how new sensor design changes the game for reflective, high‑mix assemblies. We explore the real pain points engineers face today: shiny dies that confuse cameras, BGAs packed with I/O where hidden defects hide under the body, and miniature passives that crowd tight keep‑outs. Vidya explains how three‑phase profilometry creates true 3D height maps by projecting fringe patterns and reading them from multiple angles, enabling precise checks for corner fill, underfill, and coplanarity. We also get into multi‑reflection suppression, Nordson's approach to filtering glare and ghost images so the system sees the joint, not the noise. With true RGB on side cameras and higher resolution, AOI can now pick out tiny solder balls and subtle surface issues at speed—fuel for stronger AI autoprogramming and more reliable defect classification. If throughput is king, data is queen. We talk about closing the loop from inspection back to the line to prevent bad lots—flagging stencil drift, placement offsets, and paste issues before they explode into scrap. Then we spotlight Nordson's newly launched SQ5000 Pro: faster cycle times, a wider field of view, and configurable 7 µm or 10 µm sensors designed for modern PCBA demands.
Whether you're chasing yield on high‑value GPUs or balancing AOI with AXI on dense boards, this conversation offers a practical roadmap for choosing the right tool, tackling reflectivity, and using insight to drive predictable quality.
Nordson Test and Inspection: Delivering best-in-class test, inspection, and metrology solutions for semiconductor applications.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Support the show
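As a rough sketch of the three-phase profilometry idea Vidya describes (hypothetical illustrative code, not Nordson's implementation), the classic three-step phase-shifting formula recovers the fringe phase at a pixel from three images captured at 120° phase offsets; in a calibrated setup, relative surface height is then proportional to the unwrapped phase:

```python
import math

def three_step_phase(i1: float, i2: float, i3: float) -> float:
    """Recover the wrapped fringe phase at one pixel from three intensity
    samples taken at -120, 0, and +120 degree phase shifts (classic
    three-step phase-shifting profilometry formula)."""
    return math.atan2(math.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic pixel: background 100, fringe modulation 50, true phase 0.5 rad.
true_phase = 0.5
shifts = (-2.0 * math.pi / 3.0, 0.0, 2.0 * math.pi / 3.0)
i1, i2, i3 = (100.0 + 50.0 * math.cos(true_phase + d) for d in shifts)
recovered = three_step_phase(i1, i2, i3)  # recovers ~0.5 rad
```

A real AOI system would apply this per pixel across three captured fringe images, then run phase unwrapping and a phase-to-height calibration to get the 3D map used for coplanarity and fillet checks.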

VC10X - Venture Capital Podcast
VC10X Micro - The AI Chip War: Meta Chooses Google Over Nvidia!? ($4T at Stake)

VC10X - Venture Capital Podcast

Play Episode Listen Later Nov 27, 2025 3:41


On November 25th, 2025, Nvidia did something they've never done before: they publicly defended themselves. After reports broke that Meta is negotiating a multi-billion dollar deal to buy Google's AI chips instead of Nvidia's GPUs, Nvidia posted a defensive tweet claiming they're "a generation ahead" of ASICs like Google's TPUs. But if Nvidia is so far ahead, why are they tweeting about it? And why is Meta—their second-largest customer—trying to break free?
In this deep dive, we break down the secret war for AI chips, analyze Nvidia's "panic tweet" line by line, and explain why Google is now racing toward a $4 trillion valuation while Nvidia's monopoly crumbles.
TIMESTAMPS
(0:00) Nvidia's Defensive Tweet
(0:45) The Meta Betrayal: Google TPU Deal Explained
(1:45) Google's $4 Trillion Comeback & Why TPUs Win
(2:44) Is Nvidia in Trouble?
(3:14) The Verdict: Confidence or Desperation?
KEY TAKEAWAYS
✅ Why Meta is negotiating to lease and buy Google TPUs starting in 2026
✅ The hidden weakness Nvidia accidentally revealed in their tweet
✅ How Google's "ASICs" are 30% faster and 60% more energy-efficient
✅ Why spending $50 billion/year makes efficiency matter more than versatility
✅ The end of Nvidia's monopoly pricing power
THE TWEET BREAKDOWN
We analyze Nvidia's November 25th response where they claim superiority over "ASICs" (Google's TPUs), why this is defensive PR, and what it reveals about the shifting power dynamics in AI hardware.
SUBSCRIBE FOR MORE VC & STARTUP STRATEGY
VC10X breaks down the most important stories in tech, startups, and investing every week. If you want actionable insights to help you build or invest in the next great company, subscribe now.
LET'S CONNECT
Website: https://VC10X.com
X / Twitter: https://x.com/choubeysahab
LinkedIn: https://linkedin.com/in/choubeysahab
COMMENT BELOW
Is Nvidia's tweet confident or desperate? Who wins the battle for Meta: Jensen Huang or Sundar Pichai? Let us know in the comments.
#Nvidia #Google #Meta #AIChips #TPU #JensenHuang #SundarPichai #TechNews #VentureCapital #Alphabet

Cloud Streaks
92. Is there an AI bubble? Thoughts On What Markets Are Really Pricing In. Mentioning Warren Buffett, Paul Volcker, Larry Page & More

Cloud Streaks

Play Episode Listen Later Nov 27, 2025 65:35


This blog is the best explanation of AI intelligence increase I've seen: https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/

### Defining Market Bubbles
- Traditional definition: 20%+ share price decline with economic slowdown/recession
- Alternative perspective: hype/story not matching reality over time (dot-com example)
- Duncan's view: share prices ahead of future expectations
- Share prices predict future revenue/profit
- Decline when reality falls short of predictions

### Historical Bubble Context
- Recent cycles analyzed:
  - COVID (2020) - pandemic-led, quickly reversed with government intervention
  - GFC (2008) - housing bubble, financial crisis, deeper impact
  - Tech bubble (1999) - NASDAQ fell 80%, expectations vs reality mismatch
  - S&L crisis (1992) - mini financial crisis
  - Volcker era (1980s) - interest rates raised to break inflation

### Current AI Market Dynamics
- OpenAI: fastest growing startup ever, $20B revenue run rate in 2 years
- Anthropic: grew from $1B to $9B revenue run rate this year
- Big tech revenue acceleration through AI-improved ad platform ROI
- Key concern: if growth rates plateau, valuations become unsustainable

### Nvidia as Market Bellwether
- Central position providing GPUs for data center buildout
- Recent earnings beat analyst expectations but share price fell
- Market expectations vs analyst expectations are different metrics
- 80% of market money judged on 12-month performance vs long-term value creation

### AI Technology Scaling Laws
- Intelligence capability doubling every 7 months for 6 years
- Progress from 2-second tasks to 90-minute complex programming tasks
- Cost per token declining 100x annually on frontier models
- Current trajectory: potential for year-long human-equivalent tasks by 2028

### Investment Scale and Infrastructure
- $3 trillion committed to data center construction this year
- Power becoming primary bottleneck (not chip supply)
- 500-acre solar farms being built around data centers
- 7-year backlog on gas turbines, solar+battery fastest deployment option

### Bubble vs Boom Scenarios
- Bear case: scaling laws plateau, power constraints limit growth
  - Short-term revenue slowdown despite long-term potential
  - Circular investment dependencies create domino effect
- Bull case: scaling laws continue, GDP growth accelerates to 5%+
  - Current 100% GPU utilization indicates strong demand
  - Structural productivity gains justify investment levels

### Market Structure Risks
- Foundation model layer: 4 roughly equal competitors (OpenAI, Anthropic, Google, XAI)
- No clear "winner takes all" dynamic emerging
- Private company valuations hard to access for retail investors
- Application layer: less concentrated, easier to build sustainable businesses
- Chip layer: Nvidia dominance but Google TPUs showing competitive performance
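The capability-doubling claim in the notes above compounds quickly; a toy projection (illustrative numbers only, not figures from the episode) shows why the trajectory matters:

```python
def task_horizon_minutes(start_minutes: float, months_elapsed: float,
                         doubling_months: float = 7.0) -> float:
    """Project the length of task a model can handle, assuming capability
    doubles every `doubling_months` months (the METR-style observation)."""
    return start_minutes * 2.0 ** (months_elapsed / doubling_months)

# From ~90-minute tasks today, one doubling period (7 months) gives 3-hour
# tasks; three years out (36 months) gives roughly 3,200 minutes (~53 hours).
one_period = task_horizon_minutes(90, 7)    # 180.0
three_years = task_horizon_minutes(90, 36)
```

Whether the doubling period holds, shortens, or stretches is exactly the bear-versus-bull question the episode debates.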

Squawk Pod
Making Turkey & Keeping the Peace on Thanksgiving 11/26/25

Squawk Pod

Play Episode Listen Later Nov 26, 2025 22:58


As Americans prep their Thanksgiving feasts, one hotline is bracing for its busiest day of the year. Nicole Johnson, director of the Butterball Turkey Talk-Line, explains the most common turkey questions. Then, Harvard professor Arthur Brooks shares advice for navigating family dynamics, handling holiday anxiety, and finding common ground at the dinner table. Plus, Nvidia says its GPUs are a generation ahead of Google's AI chips, and Campbell's Soup responds to leaked audio claiming its food is made for "poor people."
Arthur Brooks 13:34
Nicole Johnson 21:14
In this episode:
Nicole Johnson, @butterball
Arthur Brooks, @arthurbrooks
Becky Quick, @BeckyQuick
Andrew Ross Sorkin, @andrewrsorkin
Cameron Costa, @CameronCostaNY

AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning
Nvidia Claims Its GPUs Are a "Generation Ahead" of Google Chips

AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning

Play Episode Listen Later Nov 26, 2025 7:47


In this episode, we break down Nvidia's bold statement that its latest GPUs outperform Google's AI chips by a full generation. We explore what this means for the AI hardware race and how it could shape future model development.
Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The MAD Podcast with Matt Turck
What's Next for AI? OpenAI's Łukasz Kaiser (Transformer Co-Author)

The MAD Podcast with Matt Turck

Play Episode Listen Later Nov 26, 2025 65:25


We're told that AI progress is slowing down, that pre-training has hit a wall, that scaling laws are running out of road. Yet we're releasing this episode in the middle of a wild couple of weeks that saw GPT-5.1, GPT-5.1 Codex Max, fresh reasoning modes and long-running agents ship from OpenAI — on top of a flood of new frontier models elsewhere. To make sense of what's actually happening at the edge of the field, I sat down with someone who has literally helped define both of the major AI paradigms of our time.Łukasz Kaiser is one of the co-authors of “Attention Is All You Need,” the paper that introduced the Transformer architecture behind modern LLMs, and is now a leading research scientist at OpenAI working on reasoning models like those behind GPT-5.1. In this conversation, he explains why AI progress still looks like a smooth exponential curve from inside the labs, why pre-training is very much alive even as reinforcement-learning-based reasoning models take over the spotlight, how chain-of-thought actually works under the hood, and what it really means to “train the thinking process” with RL on verifiable domains like math, code and science. We talk about the messy reality of low-hanging fruit in engineering and data, the economics of GPUs and distillation, interpretability work on circuits and sparsity, and why the best frontier models can still be stumped by a logic puzzle from his five-year-old's math book.We also go deep into Łukasz's personal journey — from logic and games in Poland and France, to Ray Kurzweil's team, Google Brain and the inside story of the Transformer, to joining OpenAI and helping drive the shift from chatbots to genuine reasoning engines. 
Along the way we cover GPT-4 → GPT-5 → GPT-5.1, post-training and tone, GPT-5.1 Codex Max and long-running coding agents with compaction, alternative architectures beyond Transformers, whether foundation models will "eat" most agents and applications, what the translation industry can teach us about trust and human-in-the-loop, and why he thinks generalization, multimodal reasoning and robots in the home are where some of the most interesting challenges still lie.
OpenAI
Website - https://openai.com
X/Twitter - https://x.com/OpenAI
Łukasz Kaiser
LinkedIn - https://www.linkedin.com/in/lukaszkaiser/
X/Twitter - https://x.com/lukaszkaiser
FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap
Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck
(00:00) – Cold open and intro
(01:29) – "AI slowdown" vs a wild week of new frontier models
(08:03) – Low-hanging fruit: infra, RL training and better data
(11:39) – What is a reasoning model, in plain language?
(17:02) – Chain-of-thought and training the thinking process with RL
(21:39) – Łukasz's path: from logic and France to Google and Kurzweil
(24:20) – Inside the Transformer story and what "attention" really means
(28:42) – From Google Brain to OpenAI: culture, scale and GPUs
(32:49) – What's next for pre-training, GPUs and distillation
(37:29) – Can we still understand these models? Circuits, sparsity and black boxes
(39:42) – GPT-4 → GPT-5 → GPT-5.1: what actually changed
(42:40) – Post-training, safety and teaching GPT-5.1 different tones
(46:16) – How long should GPT-5.1 think? Reasoning tokens and jagged abilities
(47:43) – The five-year-old's dot puzzle that still breaks frontier models
(52:22) – Generalization, child-like learning and whether reasoning is enough
(53:48) – Beyond Transformers: ARC, LeCun's ideas and multimodal bottlenecks
(56:10) – GPT-5.1 Codex Max, long-running agents and compaction
(1:00:06) – Will foundation models eat most apps? The translation analogy and trust
(1:02:34) – What still needs to be solved, and where AI might go next

MKT Call
Stocks Rally On More Hope Of December Rate Cut

MKT Call

Play Episode Listen Later Nov 25, 2025 8:09


MRKT Matrix - Tuesday, November 25th
Dow rallies nearly 700 points on increased hope for a December rate cut (CNBC)
Consumer confidence hits lowest point since April as job worries grow (CNBC)
Market Volatility Underscores Epic Buildup of Global Risk (NYTimes)
Nvidia says its GPUs are a ‘generation ahead' of Google's AI chips (CNBC)
Google, the Sleeping Giant in Global AI Race, Now ‘Fully Awake' (Bloomberg)
OpenAI needs to raise at least $207bn by 2030 so it can continue to lose money, HSBC estimates (FT)
Oracle-Linked Borrowing Binge Worries Lenders (The Information)
Private Credit's Sketchy Marks Get Warning Shot From Wall Street's Top Cop (Bloomberg)
---
Subscribe to our newsletter: https://riskreversalmedia.beehiiv.com/subscribe
MRKT Matrix by RiskReversal Media is a daily AI powered podcast bringing you the top stories moving financial markets
Story curation by RiskReversal, scripts by Perplexity Pro, voice by ElevenLabs

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
After LLMs: Spatial Intelligence and World Models — Fei-Fei Li & Justin Johnson, World Labs

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Nov 25, 2025


Fei-Fei Li and Justin Johnson are cofounders of World Labs, who have recently launched Marble (https://marble.worldlabs.ai/), a new kind of generative "world model" that can create editable 3D environments from text, images, and other spatial inputs. Marble lets creators generate persistent 3D worlds, precisely control cameras, and interactively edit scenes, making it a powerful tool for games, film, VR, robotics simulation, and more. In this episode, Fei-Fei and Justin share how their journey from ImageNet and Stanford research led to World Labs, why spatial intelligence is the next frontier after LLMs, and how world models could change how machines see, understand, and build in 3D.
We discuss:
- The massive compute scaling from AlexNet to today and why world models and spatial data are the most compelling way to "soak up" modern GPU clusters compared to language alone.
- What Marble actually is: a generative model of 3D worlds that turns text and images into editable scenes using Gaussian splats, supports precise camera control and recording, and runs interactively on phones, laptops, and VR headsets.
- Fei-Fei's essay (https://drfeifei.substack.com/p/from-words-to-worlds-spatial-intelligence) on spatial intelligence as a distinct form of intelligence from language: from picking up a mug to inferring the 3D structure of DNA, and why language is a lossy, low-bandwidth channel for describing the rich 3D/4D world we live in.
- Whether current models "understand" physics or just fit patterns: the gap between predicting orbits and discovering F=ma, and how attaching physical properties to splats and distilling physics engines into neural networks could lead to genuine causal reasoning.
- The changing role of academia in AI, why Fei-Fei worries more about under-resourced universities than "open vs closed," and how initiatives like national AI compute clouds and open benchmarks can rebalance the ecosystem.
- Why transformers are fundamentally set models, not sequence models, and how that perspective opens up new architectures for world models, especially as hardware shifts from single GPUs to massive distributed clusters.
- Real use cases for Marble today: previsualization and VFX, game environments, virtual production, interior and architectural design (including kitchen remodels), and generating synthetic simulation worlds for training embodied agents and robots.
- How spatial intelligence and language intelligence will work together in multimodal systems, and why the goal isn't to throw away LLMs but to complement them with rich, embodied models of the world.
- Fei-Fei and Justin's long-term vision for spatial intelligence: from creative tools for artists and game devs to broader applications in science, medicine, and real-world decision-making.
—
Fei-Fei Li
X: https://x.com/drfeifei
LinkedIn: https://www.linkedin.com/in/fei-fei-li-4541247
Justin Johnson
X: https://x.com/jcjohnss
LinkedIn: https://www.linkedin.com/in/justin-johnson-41b43664
Where to find Latent Space
X: https://x.com/latentspacepod
Substack: https://www.latent.space/
Chapters
00:00:00 Introduction and the Fei-Fei Li & Justin Johnson Partnership
00:02:00 From ImageNet to World Models: The Evolution of Computer Vision
00:12:42 Dense Captioning and Early Vision-Language Work
00:19:57 Spatial Intelligence: Beyond Language Models
00:28:46 Introducing Marble: World Labs' First Spatial Intelligence Model
00:33:21 Gaussian Splats and the Technical Architecture of Marble
00:22:10 Physics, Dynamics, and the Future of World Models
00:41:09 Multimodality and the Interplay of Language and Space
00:37:37 Use Cases: From Creative Industries to Robotics and Embodied AI
00:56:58 Hiring, Research Directions, and the Future of World Labs

All TWiT.tv Shows (MP3)
Untitled Linux Show 230: Bake The Man a Pie

All TWiT.tv Shows (MP3)

Play Episode Listen Later Nov 24, 2025 90:05 Transcription Available


This week Qualcomm is back, and maybe everything is terrible with Arduino. Valve has been funding more Open Source work, and we're reading those tea leaves. Blender is out, AMD is writing code for their next-gen GPUs, and there's finally a remote access solution for Wayland. For tips, we have LibrePods for better AirPod support on Linux, paru for an easier time with the Arch User Repository, and the Zork snap to celebrate this newly Open-Sourced game from yesteryear. You can find the show notes at https://bit.ly/49uSNCy and have a great week! Host: Jonathan Bennett Co-Hosts: Jeff Massie and Rob Campbell Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.

All TWiT.tv Shows (Video LO)
Untitled Linux Show 230: Bake The Man a Pie

All TWiT.tv Shows (Video LO)

Play Episode Listen Later Nov 24, 2025 90:05 Transcription Available


This week Qualcomm is back, and maybe everything is terrible with Arduino. Valve has been funding more Open Source work, and we're reading those tea leaves. Blender is out, AMD is writing code for their next-gen GPUs, and there's finally a remote access solution for Wayland. For tips, we have LibrePods for better AirPod support on Linux, paru for an easier time with the Arch User Repository, and the Zork snap to celebrate this newly Open-Sourced game from yesteryear. You can find the show notes at https://bit.ly/49uSNCy and have a great week! Host: Jonathan Bennett Co-Hosts: Jeff Massie and Rob Campbell Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.

Breaking Analysis with Dave Vellante
Resetting GPU Depreciation — Why AI Factories Bend, But Don't Break, Useful Life Assumptions

Breaking Analysis with Dave Vellante

Play Episode Listen Later Nov 24, 2025 10:56


Much attention has been focused in the news on the useful life of GPUs. While the prevailing narrative suggests GPUs have a short lifespan and that operators are "cooking the books," our research indicates that GPUs, like CPUs before them, have a significantly longer useful life than many claim.
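To make the useful-life debate concrete, consider straight-line depreciation, where the assumed lifespan directly sets the annual charge against earnings (the fleet cost below is a hypothetical illustration, not a figure from this research):

```python
def annual_straight_line(cost: float, useful_life_years: float,
                         salvage: float = 0.0) -> float:
    """Annual depreciation expense under simple straight-line accounting."""
    return (cost - salvage) / useful_life_years

fleet_cost = 10_000_000_000.0  # hypothetical $10B GPU fleet
short_life = annual_straight_line(fleet_cost, 3)  # ~$3.33B/year expense
long_life = annual_straight_line(fleet_cost, 6)   # ~$1.67B/year expense
# Doubling the assumed useful life halves the annual hit to reported
# earnings, which is why the assumption is so hotly contested.
```

This is why a few years' difference in assumed useful life bends operators' reported margins substantially without, by itself, breaking the economics.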

The Cloudcast
After NVIDIA's Latest Earnings

The Cloudcast

Play Episode Listen Later Nov 23, 2025 22:47


Once again NVIDIA had a record earnings quarter (Q3FY26), but the strength of their on-going success will be dependent on many factors that may or may not be within their control. Let's explore those broader factors.
SHOW: 978
SHOW TRANSCRIPT: The Cloudcast #978 Transcript
SHOW VIDEO: https://youtube.com/@TheCloudcastNET
CLOUD NEWS OF THE WEEK: http://bit.ly/cloudcast-cnotw
CHECK OUT OUR NEW PODCAST: "CLOUDCAST BASICS"
SHOW SPONSORS:
[TestKube] TestKube is a Kubernetes-native testing platform, orchestrating all your test tools, environments, and pipelines into scalable workflows empowering Continuous Testing. Check it out at TestKube.io/cloudcast
[Mailtrap] Try Mailtrap for free
[Interconnected] Interconnected is a new series from Equinix diving into the infrastructure that keeps our digital world running. With expert guests and real-world insights, we explore the systems driving AI, automation, quantum, and more. Just search "Interconnected by Equinix".
SHOW NOTES:
NVIDIA Earnings (Q3FY2026 - November 2025)
WHAT WILL BE THE NEW METRICS AND MILESTONES TO TRACK?
Customer Revenues (e.g. CoreWeave, OpenAI)
"Alternatives" Revenues (e.g. Google/TPUs, AMD, China, etc.)
Customer Success Stories (%ROI, Business Differentiation, Business Acceleration)
Growth of Data Centers (e.g. buildouts, zoning approvals, etc.)
Electricity Buildouts (e.g. nuclear, coal, alternative, regulatory changes, municipality adoption)
Accounting Deep-Dives into NVIDIA (not fraud, but days receivables, inventory buybacks, etc.)
$500B in back orders (Oracle, Microsoft, OpenAI, GrokAI)
FEEDBACK?
Email: show at the cloudcast dot net
Twitter/X: @cloudcastpod
BlueSky: @cloudcastpod.bsky.social
Instagram: @cloudcastpod
TikTok: @cloudcastpod

The Rundown
What Nvidia's Perfect Quarter, Google Gemini 3, and Interest Rates Reveal About the AI Trade (ft. Doug O'Laughlin)

The Rundown

Play Episode Listen Later Nov 23, 2025 28:36


Is the AI trade cooling off, or is this the calm before an even more explosive phase of the build-out? Zaid sits down with Doug O'Laughlin, President at SemiAnalysis, to break down Nvidia's earnings report, why AI spending might actually accelerate from here, how Google's Gemini 3 and in-house TPUs have suddenly shifted the competitive landscape, and why founders like Zuck are willing to pour billions into GPUs even as investors panic. They dive into the real useful life of Nvidia chips, whether OpenAI's shine is fading, how rate cuts fuel the "AI party," and what happens when hyperscalers start borrowing real money to ramp capex. Plus: why Oracle might actually be betting the company, and Doug's call on when OpenAI could IPO.

This Week in Startups
Jason predicts a “major M&A moment” in the next six months! | E2213

This Week in Startups

Play Episode Listen Later Nov 22, 2025 55:35


Register here to join Founder University Japan's kickoff: https://luma.com/cm0x90mk
Today's show:
Google and Meta had their cases dismissed (or received a slap on the wrist)… Despite all the backlash and cynicism, AI companies continue making bank and releasing hot new products… What does it all mean?
For Jason Calacanis, the signs are pointing to a "major M&A moment," with huge opportunities for increased efficiency and consolidation among America's favorite brands and largest companies. Who will it be? Join Jason and Alex for a round of hot speculation.
PLUS why Jason thinks Michael Burry is both right and wrong about GPU depreciation, why NOTHING is certain about these OpenAI mega-deals, Google's Nano Banana Pro can make infographics and they're VERY impressive… and much more.
Timestamps:
(1:54) Jason's calling in from Vegas… He's doing a hot lap at F1!
(3:18) How restaurants are becoming the new Hot IP
(6:50) Founder University is heading to TOKYO!
(9:27) Why Jason thinks the future of startups is truly global
(10:06) Pipedrive - Bring your entire sales process into one elegant space. Get started with a 30 day free trial at pipedrive.com/twist
(11:39) Nvidia killed it on the numbers… but what are the vibes around AI? Jason sounds off.
(13:05) Why nothing is certain when it comes to the Nvidia/OpenAI deal
(19:40) Is Google now WINNING consumer adoption of AI? How did it get this close?
(19:57) Crusoe Cloud: Crusoe is the AI factory company. Reliable infrastructure and expert support. Visit https://crusoe.ai/build to reserve your capacity for the latest GPUs today.
(26:07) Meanwhile, AI apps are still dominating the iOS Store
(27:09) Why Jason and Alex think Michael Burry's both right and wrong about GPU depreciation
(30:13) Northwest Registered Agent - Form your entire business identity in just 10 clicks and 10 minutes. Get more privacy, more options, and more done—visit https://www.northwestregisteredagent.com/twist today!
(37:46) We're testing out Nano Banana Pro on a BBQ infographic challenge
(43:42) What a week for AI models! It doesn't seem like things are slowing down…
(46:12) Kalshi is growing fast, but can it catch Polymarket?
(47:50) Is a rate cut coming? Jason and Alex read the tea leaves.
(50:13) Why Jason predicts a "major M&A moment" in the next six months
(52:09) VIEWER QUESTION: What should a software engineer be working on RIGHT NOW.
(54:02) Founder Friday is now… STARTUP SUPPER CLUB
Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com
Check out the TWIST500: https://www.twist500.com
Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp
Follow Lon:
X: https://x.com/lons
Follow Alex:
X: https://x.com/alex
LinkedIn: https://www.linkedin.com/in/alexwilhelm
Follow Jason:
X: https://twitter.com/Jason
LinkedIn: https://www.linkedin.com/in/jasoncalacanis
Thank you to our partners:
(10:06) Pipedrive - Bring your entire sales process into one elegant space. Get started with a 30 day free trial at pipedrive.com/twist
(19:57) Crusoe Cloud: Crusoe is the AI factory company. Reliable infrastructure and expert support. Visit https://crusoe.ai/build to reserve your capacity for the latest GPUs today.
(30:13) Northwest Registered Agent - Form your entire business identity in just 10 clicks and 10 minutes. Get more privacy, more options, and more done—visit https://www.northwestregisteredagent.com/twist today!
Great TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland
Check out Jason's suite of newsletters: https://substack.com/@calacanis
Follow TWiST:
Twitter: https://twitter.com/TWiStartups
YouTube: https://www.youtube.com/thisweekin
Instagram: https://www.instagram.com/thisweekinstartups
TikTok: https://www.tiktok.com/@thisweekinstartups
Substack: https://twistartups.substack.com

Hanselminutes - Fresh Talk and Tech for Developers
C++ is Still Here, Still Powerful with Gabriel Dos Reis

Hanselminutes - Fresh Talk and Tech for Developers

Play Episode Listen Later Nov 20, 2025 34:25


In a world of Rust, Go, and Python, why does C++ still matter? Dr. Gabriel Dos Reis joins Scott to explain how C++ continues to shape everything from GPUs and browsers to AI infrastructure. They talk about performance, predictability, and the art of balancing power with safety...and how the language's constant evolution keeps it relevant four decades in.

IT Visionaries
How to Maximize ROI on AI in 2026

IT Visionaries

Play Episode Listen Later Nov 20, 2025 59:51


The promise of agentic AI has been massive: autonomous systems that act, reason, and make business decisions. But most enterprises are still struggling to see results. In this episode, host Chris Brandt sits down with Sumeet Arora, Chief Product Officer at Teradata, to unpack why the gap exists between AI hype and actual impact, and what it takes to make AI scale, explainable, and ROI-driven. From the shift toward "AI with ROI" to the new era of human + AI systems and data quality challenges, Sumeet shares how leading enterprises are moving from flashy demos to measurable value and trust in the next phase of AI.
CHAPTER MARKERS
00:00 The AI Hackathon Era
03:10 Hype vs Reality in Agentic AI
06:05 Redesigning the Human AI Interface
09:15 From Demos to Real Economic Outcomes
12:20 Why Scaling AI Still Fails
15:05 The Importance of AI Ready Knowledge
18:10 Data Quality and the Biggest Bottleneck
20:46 Building the Customer 360 Knowledge Layer
23:35 Push vs Pull Systems in Modern AI
26:15 Rethinking Enterprise Workflows
29:20 AI Agents and Outcome Driven Design
32:45 Where Agentic AI Works Today
36:10 What Enterprises Still Get Wrong
39:30 How AI Changes Engineering Priorities
55:49 The Future of GPUs and Efficiency Challenges
--
This episode of IT Visionaries is brought to you by Meter - the company building better networks. Businesses today are frustrated with outdated providers, rigid pricing, and fragmented tools. Meter changes that with a single integrated solution that covers everything wired, wireless, and even cellular networking. They design the hardware, write the firmware, build the software, and manage it all so your team doesn't have to. That means you get fast, secure, and scalable connectivity without the complexity of juggling multiple providers. Thanks to Meter for sponsoring. Go to meter.com/itv to book a demo.
---
IT Visionaries is made by the team at Mission.org. Learn more about our media studio and network of podcasts at mission.org.

The Tech Blog Writer Podcast
3491: From NHL Ice to Enterprise Data: Ataccama's CEO on Building AI That Actually Works

The Tech Blog Writer Podcast

Play Episode Listen Later Nov 19, 2025 30:58


What happens when a former NHL player who once faced Wayne Gretzky ends up running a global data company that sits at the center of the AI boom? That question kept coming back to me as I reconnected with Mike McKee, the CEO of Ataccama, seven years after our last conversation. So much has shifted in the world since then, yet the theme that shaped this discussion felt surprisingly grounded: none of the big promises of AI can take hold unless leaders can rely on the data sitting underneath every system they run.

Mike brings a rare mix of stories and experience to this theme. His journey from the ice to the C-suite feels like its own lesson in discipline, teamwork, and patience, and he openly reflects on the way those early years influence how he leads today. But the heart of this conversation sits in the reality he sees inside global enterprises. Everyone is racing to build AI-powered services, yet the biggest blockers are messy records, inconsistent metadata, long-forgotten databases, and years of quality issues that were never addressed. It is a blunt problem, and Mike explains why the companies winning with AI right now are the ones treating data trust as a foundation rather than an afterthought.

Across the discussion, he shares stories from organisations like T-Mobile and Prudential, where millions of records, thousands of systems, and vast volumes of structured and unstructured data must be monitored, understood, and governed in real time. Mike walks through how teams build confidence in their data again, why quality scores matter, and how automation now shapes everything from compliance to customer retention. What stood out most is how quickly the expectations have shifted. Boards and CEOs now treat data as a strategic asset rather than an operational chore, and entire roles have emerged above the chief data officer to steer these programmes.

This episode is also a reminder that AI progress is never only about models or GPUs. Mike pulls back the curtain on why organisations struggle to measure AI readiness, how they can avoid bottlenecks, and what it takes to prioritise the work that actually moves the needle. His point is simple: without trustworthy data, AI remains a promise rather than a practical tool. With it, businesses can act with confidence, respond faster, and make decisions that genuinely improve outcomes for customers and employees.

So as AI reaches deeper into systems everywhere, how should leaders rethink their approach to data trust, governance, and quality? And if you have been on your own journey with data challenges, where have you seen progress and where are you still stuck? I would love to hear your thoughts.

Tech Talks Daily is sponsored by NordLayer: Get the exclusive Black Friday offer: 28% off NordLayer yearly plans with the coupon code techdaily-28. Valid until December 10th, 2025. Try it risk-free with a 14-day money-back guarantee.

Alpha Exchange
Jordi Visser, CEO of Visser Labs and Head of AI Macro Research at 22V

Alpha Exchange

Play Episode Listen Later Nov 18, 2025 55:48


On this episode of the Alpha Exchange, I'm pleased to welcome back Jordi Visser, CEO of Visser Labs and Head of AI Macro Research at 22V. Our conversation centers on one of the most consequential themes in markets today: the intersection of artificial intelligence, exponential innovation, and market structure. With Nvidia's historic rise as a backdrop and AI's increasing integration into every sector, Jordi pushes back on the tendency to label this cycle a "bubble," arguing that AI is more akin to electricity: an enabling technology whose applications will permeate everyday life. Demand for compute remains effectively infinite, he notes, and the supply shortfalls in GPUs, data centers, and power capacity shape how investors should think about the buildout phase.

Jordi also lays out a framework for navigating volatility in sectors tied to the AI buildout, including how to handle 20–30% drawdowns, and why estimate revisions matter more than multiple expansion from here. Beyond markets, we explore the labor dynamics of exponential technology: the K-shaped economy, margin pressure at retailers, and why he believes labor participation will keep drifting lower even without mass layoffs.

Finally, we examine the policy environment. Here Jordi asserts that the Fed's framework is backward looking and misses how humanoids, robotaxis, and accelerated drug discovery may drive deflationary pressures.

I hope you enjoy this episode of the Alpha Exchange, my conversation with Jordi Visser.

Thoughts on the Market
Europe in the Global AI Race

Thoughts on the Market

Play Episode Listen Later Nov 13, 2025 11:29


Live from Morgan Stanley's European Tech, Media and Telecom conference in Barcelona, our roundtable of analysts discuss artificial intelligence in Europe, and how the region could enable the Agentic AI wave.

Read more insights from Morgan Stanley.

----- Transcript -----

Paul Walsh: Welcome to Thoughts on the Market. I'm Paul Walsh, Morgan Stanley's European head of research product. We are bringing you a special episode today live from Morgan Stanley's 25th European TMT Conference, currently underway. The central theme we're focused on: Can Europe keep up from a technology development perspective?

It's Wednesday, November the 12th at 8:00 AM in Barcelona.

Earlier this morning I was live on stage with my colleagues, Adam Wood, Head of European Technology and Payments, Emmet Kelly, Head of European Telco and Data Centers, and Lee Simpson, Head of European Technology Hardware. The larger context of our conversation was tech diffusion, one of the four key themes that we've identified at Morgan Stanley Research for 2025. For the panel, we wanted to focus further on agentic AI in Europe, AI disruption as well as adoption, and data centers. We started off with my question to Adam. I asked him to frame our conversation around how Europe is enabling the agentic AI wave.

Adam Wood: I mean, I think obviously the debate around GenAI, and particularly enterprise software, my space, has changed quite a lot over the last three to four months. Maybe it's good if we do go back a little bit to the period before that, when everything was more positive in the world. And I think it is important to think about, you know, why we were excited, before we started to debate the outcomes. And the reason we were excited was we've obviously done a lot of work with enterprise software to automate business processes. That's ultimately what software is about. It's about automating and standardizing business processes, so they can be done more efficiently and more repeatably.
We'd done work in the past on RPA vendors who tried to take the automation further. And we were getting numbers that, you know, 30–40 percent of enterprise processes had been automated in this way. But I think the feeling was it was still the minority. And the reason for that was it was quite difficult with traditional coding techniques to go a lot further. You know, if you take the call center as a classic example, it's very difficult to code what every response is going to be to human interaction with a call center worker. It's practically impossible. And so, you know, where we got into those situations where it was difficult to code every outcome, we'd leave it with labor. And we'd do the labor arbitrage often, where we'd move from onshore workers to offshore workers, but we'd still leave it as a relatively manual process with human intervention in it.

I think the really exciting thing about GenAI is it completely transforms that equation, because if the computers can understand natural human language – again, to our call center example – we can train the models on every call center interaction. And then first of all, we can help the call center worker predict what the responses are going to be to incoming queries. And then maybe over time we can even automate that role. I think it goes a lot further than, you know, call center workers. We can go into finance, where a lot of work is still either manual data re-entry or remediation of errors. And again, we can automate a lot more of those tasks. That's obviously where SAP's involved.

But basically what I'm trying to say is: if we expand massively the capabilities of what software can automate, surely that has to be good for the software sector; that has to expand the addressable markets of what software companies are going to be able to do. Now we can have a secondary debate around: Is it going to be the incumbents? Is it going to be corporates that do more themselves?
Is it going to be new entrants that benefit from this? But I think it's very hard to argue that if you expand dramatically the capabilities of what software can do, you don't get a benefit from that in the sector. Now we're a little bit more consumer today in terms of spending, and the enterprises are lagging a little bit. But I think for us, that's just a question of timing. And we think we'll see that come through. I'll leave it there. But I think there's lots of opportunities in software. We're probably yet to see them come through in numbers, but that shouldn't mean, you know, that we don't think they're going to happen.

Paul Walsh: Yeah. We're going to talk separately about AI disruption as we go through this morning's discussion. But what's the pushback you get, Adam, to this notion of, you know, the addressable market expanding?

Adam Wood: It's one of a number of things. And we get onto the kind of multiple bear cases that come up on enterprise software. It would be some combination of the following. Well, if coding becomes dramatically cheaper and we can set up, you know, user interfaces on the fly in the morning that can query data sets, and we can access those data sets almost in an automated way – well, maybe companies just do this themselves, and we move from a world where we've been outsourcing software to third-party software vendors to doing more of it in-house. That would be one.

The other one would be that the barriers to entry of software have just come down dramatically. It's so much easier to write the code, to build a software company and to get out into the market, so it's going to be new entrants that challenge the incumbents. And that will just bring price pressure on the whole market. So, although what we automate gets bigger, the price we charge to do it comes down.

The third one would be the seat-based pricing issue: a lot of software vendors to date have expressed the value they deliver to customers through how many seats of the software you have in-house. Well, if we take out 10–20 percent of your HR department because we make them 10, 20, 30 percent more efficient, does that mean we pay the software vendor 10, 20, 30 percent less? And so again, we're delivering more value, we're automating more and making companies more efficient, but the value doesn't accrue to the software vendors. It's some combination of those themes, I think, that people would worry about.

Paul Walsh: And Lee, let's bring you into the conversation here as well, because around this theme of enabling the agentic AI wave, we sort of identified three main enabler sectors. Obviously, Adam's with the software side. Cap goods being the other one that we mentioned in the work that we've done. But obviously semis is also an important piece of this puzzle. Walk us through your thoughts, please.

Lee Simpson: Sure. I think from a sort of a hardware perspective – and really we're talking about semiconductors here, and possibly even just the equipment guys specifically, when seeing things through a European lens – it's been a bonanza. We've seen quite a big build out, obviously, for GPUs. We've seen incredible new server architectures going into the cloud. And now we're at the point where we're changing things a little bit. Does the power architecture need to be changed? Does the nature of the compute need to change? And with that, the development and the supply needs to move with that as well. So, we're now seeing the mantle being picked up by the AI guys at the very leading edge of logic. So, someone has to put the equipment in the ground, and the equipment guys are being leaned into. And you're starting to see that change in the order book now. Now, I labor this point largely because, you know, we'd been seen as laggards, frankly, in the last couple of years. It'd been a U.S. story, a GPU-heavy story.
But I think for us now we're starting to see a flipping of that, and it's like, hold on, these are beneficiaries. And I really think it's 'cause that bow wave has changed in logic.

Paul Walsh: And Lee, you talked there in your opening remarks about the extent to which obviously the focus has been predominantly on the U.S. ways to play, which is totally understandable for global investors. And obviously this has been an extraordinary year of ups and downs as it relates to the tech space. What's your sense in terms of what you are getting back from clients? Is the focus shifting maybe from some of those U.S. ways to play to Europe? Are you sensing that shift taking place? How are clients interacting with you as it relates to the focus between the opportunities in the U.S. and Asia, frankly, versus Europe?

Lee Simpson: Yeah. I mean, Europe's coming more into the debate. More people are willing to talk to some of the players. We've got other players in the analog space playing into that as well. But I think for me, if we take a step back and keep this at the global level, there's a huge debate now around: What is the size of build out that we need for AI? What is the nature of the compute? What is the power pool? What are the power budgets going to look like in data centers? And Emmet will talk to that as well. So, some of that argument's coming now and centering on Europe: How do they play into this? But for me, most of what we're finding people debate is: Is a 20–25 gigawatt year feasible for [20]27? Is 30–35 gigawatts for [20]28 feasible? And so, I think that's the debate line at this point – not so much Europe in the debate. It's more: What is that global pool going to look like?

Paul Walsh: Yeah. This whole infrastructure rollout's got significant implications for your coverage universe…

Lee Simpson: It does. Yeah.
Paul Walsh: Emmet, it may be a bit tangential for the telco space, but was there anything you wanted to add there as it relates to this sort of agentic wave piece, from a telco's perspective?

Emmet Kelly: Yeah, there's a consensus view out there that telcos are not really that tuned into the AI wave at the moment, just from a stock market perspective. I think it's fair to say some telcos have been a source of funds for AI, and we've seen that in a stock market context, especially in the U.S. telco space versus U.S. tech over the last three to six months. So, there are a lot of question marks about the telco exposure to AI. And I think the telcos have kind of struggled to put their case forward about how they can benefit from AI. They talked 18 months ago about using chatbots. They talked about smart networks, et cetera. But they haven't really advanced their case since then. And we don't see telcos involved much in the data center space. And that's understandable, because investing in data centers, as we've written, is extremely expensive. So, if I rewind the clock two years ago, a good-sized data center was 1 megawatt. A year ago, that number was somewhere about 50 to 100 megawatts. And today a big data center is a gigawatt. Now, if you want to roll out a 100 megawatt data center – which is a decent-sized data center, but it's not huge – that will cost roughly 3 billion euros. So, telcos have yet to really prove that they've got much positive exposure to AI.

Paul Walsh: That was an edited excerpt from my conversation with Adam, Emmet and Lee. Many thanks to them for taking the time out for that discussion, and to the live audience for hearing us out.

We will have a concluding episode tomorrow where we dig into tech disruption and data center investments. So please do come back for that very topical conversation. As always, thanks for listening.
Let us know what you think about this and other episodes by leaving us a review wherever you get your podcasts. And if you enjoy Thoughts on the Market, please tell a friend or colleague to tune in today.