This Week In Startups is made possible by:
Caldera + Lab - http://calderalab.com/TWIST
Crusoe Cloud - https://crusoe.ai/build
Uber - http://uber.com/twist

Today's show: Why did a power outage in the Bay Area cause Waymos to pile up on city streets? Jason was actually in San Francisco to take in the spectacle of Waymos blocking traffic. But why did this happen? And can we look forward to a day when automated cars are more graceful and coordinated than ballet dancers performing "Swan Lake"? We're asking the tough (and also culturally erudite) questions!

PLUS: self-driving cars are coming to London, Coinbase's buying spree continues, another entrant in our nearly-complete Gamma Pitch Deck Competition, AND why Jason predicts that Google is going to buy UBER! You won't want to miss this holiday TWiST!

Timestamps:
(00:00) It's a holiday TWiST! Jason's calling in from vacay in Lake Tahoe.
(03:11) Jason was in SF for the great Waymo power outage!
(06:06) Why Jason says one day Waymos will be better coordinated than dancers in "Swan Lake"
(07:33) We predicted Starlink coming to every Tesla nearly 3 years ago!
(09:05) Caldera + Lab: Whether you're starting fresh or upgrading your routine, Caldera Lab makes skincare simple and effective. Head to http://calderalab.com/TWIST and use TWIST at checkout for 20% off your first order.
(11:29) Jason calls what Tesla's Optimus team is planning "otherworldly"
(14:39) Why Jason thinks we're all going to live in an "Opt-In Truman Show" someday soon
(18:52) Baidu, Lyft, and Uber bring self-driving cars to London… they don't have them already?!
(20:49) Crusoe Cloud: Crusoe is the AI factory company. Reliable infrastructure and expert support. Visit https://crusoe.ai/build to reserve your capacity for the latest GPUs today.
(24:02) When do we get to 50% of all rides being done by autonomous vehicles… and how many robotaxis will that take?
(27:50) Why Jason thinks Google is going to buy… UBER?!
(31:12) Uber AI Solutions: Your trusted partner to get AI to work in the real world. Book a demo with them TODAY at http://uber.com/twist
(32:30) Will it eventually come down to which car can drive a mile for the cheapest?
(35:08) Coinbase picks up The Clearing Company, which makes frameworks for prediction markets
(39:50) How much did the biggest AI models improve this year?
(44:09) Who's going to actually buy Warner Bros? We're checking the Polymarket.
(47:06) GAMMA PITCH w/ Jonathan Sherman of Lumix Ads
(51:00) Why Jason says Jonathan's pitch is a 9.5 out of 10
(52:50) What Jason looks for in a founder: "a big audacious vision"
(54:18) How Lumix (safely) collects users' "mobile ad ID" on the go to identify them

Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com/
Check out the TWIST500: https://twist500.com
Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp

Follow Lon:
X: https://x.com/lons

Follow Alex:
X: https://x.com/alex
LinkedIn: https://www.linkedin.com/in/alexwilhelm/

Follow Jason:
X: https://twitter.com/Jason
LinkedIn: https://www.linkedin.com/in/jasoncalacanis/

Thank you to our partners:
(09:05) Caldera + Lab: Whether you're starting fresh or upgrading your routine, Caldera Lab makes skincare simple and effective. Head to http://calderalab.com/TWIST and use TWIST at checkout for 20% off your first order.
(20:49) Crusoe Cloud: Crusoe is the AI factory company. Reliable infrastructure and expert support. Visit https://crusoe.ai/build to reserve your capacity for the latest GPUs today.
(31:12) Uber AI Solutions: Your trusted partner to get AI to work in the real world. Book a demo with them TODAY at http://uber.com/twist
Building artificial intelligence tools requires a lot of graphics processing units, and those GPUs need huge amounts of ultra-fast memory to feed them data. Micron Technology is one of a handful of memory chip makers that has been selling a whole lot of memory, thanks to the AI boom. Plus, cloud company Oracle's data center debt is coming under scrutiny. And Merriam-Webster names the word of the year for 2025: slop. Marketplace's Meghan McCarty Carino spoke with Anita Ramaswamy, columnist at The Information, to learn more on this week's Marketplace Tech Bytes: Week in Review.
Jeetu Patel knows a few AI secrets. As the President of one of the largest companies in the world, he's helped pave the AI adoption roadmap. At Cisco, they provide full-stack, enterprise AI solutions spanning infrastructure, security, observability, and operations to the world's largest companies. So naturally, Jeetu could write a legit playbook on what's slowing enterprises down in the AI fast lane and how they can overcome those bottlenecks. And naturally, Jeetu is gonna share it all with us.

The 3 Big Obstacles Holding AI Adoption Back -- An Everyday AI Chat with Cisco President Jeetu Patel

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion on LinkedIn: Thoughts on this? Join the convo on LinkedIn and connect with other AI leaders.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Enterprise AI Adoption Rates & Challenges
AI Workflow Automation Phase Explained
Three Big Obstacles to AI Adoption
Infrastructure Constraints for Enterprise AI
Trust Deficit in AI Systems
Data Gaps Impacting AI Success
Measuring ROI on Enterprise AI Deployment
Future Trends: Agentic AI and Original Insights

Timestamps:
00:00 AI Adoption Challenges in Enterprise
05:18 AI Adaptation: The Key Strength
08:56 AI Infrastructure and Trust Challenges
10:23 Building Trust and Harnessing Data
13:27 Unsatiated Demand Signals Growth
19:12 Proactive AI Model Safeguards
22:07 AI Strategy and Business Growth
26:09 Key Metrics for AI Success
28:10 Guardrails for AI Vulnerabilities
31:34 AI Unlocking Revolutionary Discoveries

Keywords: AI adoption, obstacles to AI adoption, enterprise AI, generative AI, AI strategies, chatbots, autonomous agents, workflow automation, business productivity automation, infrastructure for AI, AI power consumption, data center capacity, compute capacity, GPUs, Nvidia, AMD, network bandwidth, CapEx in AI, AI bubble, national security and AI, economic growth and AI, AI trust deficit, securing AI, AI safety, AI hallucinations, large language models, model unpredictability, AI guardrails, algorithmic jailbreak, AI security stack, AI defense, company data as moat, AI data pipeline, data gap in AI, machine data, human data, synthetic data, time series data, data correlation, AI model training, AI ROI, trust in AI systems, agentic workflows, future of AI, robotics, humanoid AI, physical AI, original insights with AI, economic prosperity with AI, AI-generated knowledge, workflow automation with AI agents, scaling AI in enterprises

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Ready for ROI on GenAI? Go to youreverydayai.com/partner
Episode 93: A rumor and news episode to round out 2025. We chat a bit more about 9850X3D expectations, the current and future state of Intel CPUs following some 225F testing, Nvidia cutting GPU supply, potential new GPUs, and Steve kills some hardware.

CHAPTERS
00:00 - Intro
02:33 - More Thoughts on the 9850X3D
06:13 - Where is Intel at With Their CPUs?
24:42 - Nvidia Cutting GPU Supply?
37:38 - AMD Launches Radeon RX 9060 XT LP
43:22 - More Intel Arc B770 Rumors
49:12 - Updates From Our Boring Lives

SUBSCRIBE TO THE PODCAST
Audio: https://shows.acast.com/the-hardware-unboxed-podcast
Video: https://www.youtube.com/channel/UCqT8Vb3jweH6_tj2SarErfw

SUPPORT US DIRECTLY
Patreon: https://www.patreon.com/hardwareunboxed

LINKS
YouTube: https://www.youtube.com/@Hardwareunboxed/
Twitter: https://twitter.com/HardwareUnboxed
Bluesky: https://bsky.app/profile/hardwareunboxed.bsky.social

Hosted on Acast. See acast.com/privacy for more information.
Comment on this episode by going to KDramaChat.com

Today, we'll be discussing Episode 5 of Start-Up, the hit K Drama on Netflix starring Bae Suzy as Seo Dal-mi, Nam Joo Hyuk as Nam Do-san, Kim Seon Ho as Han Ji Pyeong, Kang Han Na as Won In Jae, and Kim Hae Sook as Choi Won Deok.

We discuss:
- The songs featured during the recap: "Running" by Gaho and "Shake Shake."
- The intense and emotional hackathon that tests our characters' ambition, determination, and self-worth.
- Seo Dal-mi's rising ambition and her impressive performance as the new CEO of Samsan Tech.
- Nam Do-san's growing confidence, his romantic development, and his beautiful metaphor involving Tarzan.
- The theme of imposter syndrome and how both Dal-mi and Do-san feel they're not worthy — but believe in each other.
- The critical role APIs, GPUs, data sets, and artificial neural networks play in tech — and how they're introduced in the show.
- Han Ji Pyeong's internal turmoil, guilt, and shift from dismissive investor to personal mentor and backer of Samsan Tech.
- The heartbreaking reveal that Dal-mi didn't go to college because she wanted to buy a corn dog truck for her grandmother.
- Dal-mi's smart and humble recruitment of Jeong Sa Ha, a designer with top-tier credentials, by literally going down on her knees.
- The competitive and cold dynamic between the sisters, especially in the brutal bathroom scene.
- The sly arrival of stylish twins to In Jae Company and the challenge they pose to Samsan Tech.
- Alex Kwon's savvy evaluation of Samsan Tech's potential, not just performance — and his pivotal vote that secures their place in Sandbox.
- The ethics and motivations behind Han Ji Pyeong's involvement in the letters, and Seo Dal-mi's growing suspicions.
- Our reflections on the character of Han Ji Pyeong and whether redemption is possible.
- The amazing career of Kang Han Na, the actress who plays Won In Jae, including her roles in Moon Lovers, Bon Appetit, and her stint as a top DJ for KBS.

References:
- Kang Han Na on Wikipedia
- GUI Steakhouse in New York City
- Data.gov, the home of the US Government's Open Data
- Running by Gaho
Try OCI for free at http://oracle.com/eyeonai
This episode is sponsored by Oracle. OCI is the next-generation cloud designed for every workload, where you can run any application, including any AI project, faster and more securely for less. On average, OCI costs 50% less for compute, 70% less for storage, and 80% less for networking. Join Modal, Skydance Animation, and today's innovative AI tech companies who upgraded to OCI… and saved.

Why is AI moving from the cloud to our devices, and what makes on-device intelligence finally practical at scale? In this episode of Eye on AI, host Craig Smith speaks with Christopher Bergey, Executive Vice President of Arm's Edge AI Business Unit, about how edge AI is reshaping computing across smartphones, PCs, wearables, cars, and everyday devices. We explore how Armv9 enables AI inference at the edge, why heterogeneous computing across CPUs, GPUs, and NPUs matters, and how developers can balance performance, power, memory, and latency. Learn why memory bandwidth has become the biggest bottleneck for AI, how Arm approaches scalable matrix extensions, and what trade-offs exist between accelerators and traditional CPU-based AI workloads. You will also hear real-world examples of edge AI in action, from smart cameras and hearing aids to XR devices, robotics, and in-car systems. The conversation looks ahead to a future where intelligence is embedded into everything you use, where AI becomes the default interface, and why reliable, low-latency, on-device AI is essential for creating experiences users actually trust.

Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI
In the last tertulia of the year we welcome Carles Reina (ElevenLabs) in his role as head of go-to-market and also as an investor: he mentions that he has written dozens of angel "tickets" and has just raised his own fund, Baobab Ventures.

From there, the conversation digs into the ElevenLabs business: they explain that the company builds "natural" voice models operating in roughly 70 languages, and has built products on top for agents, dubbing, transcription, and more. Carles shares very concrete traction figures (around 400 people and more than 300M in revenue, reached "a few weeks ago") and describes a very aggressive enterprise engine (150-170 contracts per month, and one especially "crazy" day topping 14M in enterprise). They also discuss why the company keeps raising rounds despite generating cash: a signal to the market, liquidity for employees via secondaries, and the capacity to invest and acquire (including GPUs). In that segment they joke a lot about multiples ("33x" as the standard) and a possible bubble in the sector.

They then turn to the more "cultural" and controversial topic: dubbing and voice rights. Masumi comes up, a well-known dubbing actor in Spain (the voice of Harry Potter and Anakin, with ties to the union), and they debate the red line of "don't train models with our voices" versus consented uses. Carles recounts practical cases in Hollywood where actors grant permission to use their own voice in post-production when they can't record, and the idea that anchors the whole debate emerges: being able to watch a film with "the actor's own voice" in another language (for example, the ideal of hearing the same actress speaking Catalan or Spanish without losing her identity), versus the reality of media consumption in Spain (accustomed to dubbing) and the alternative of original version with subtitles.

In the news segment, the longest block revolves around a supposed wave of consolidation in streaming and media: they discuss a Netflix deal for Warner (centered on digital assets like HBO/HBO Max) and, as a counterweight, a Paramount offer for "everything" (plus the political and regulatory mess around antitrust). They bring in names and political context: Donald Trump weighing in publicly, the Ellison family (Oracle) behind Paramount, tensions over content and editorial lines, and how all of that shifts preferences and narratives; they even tie it to TikTok as part of the "noise" of those days. Analytically, they connect it to the clash between "premium quality" (HBO/Warner) and "algorithmic volume and variety" (Netflix/TikTok), and to the risk that consolidation reduces competition and therefore the incentive for quality. They also touch on other tech/AI news of the moment: whether an OpenAI announcement with Disney changes anything in the relationship (Carles says no), and the headline that Amazon has "finally" invested heavily in OpenAI, which they link to the infrastructure war (chips/TPUs vs. Nvidia and the role of the cloud).

Finally, the focus widens to investing and the market: ElevenLabs has a venture arm and invests off its balance sheet, and the usual late-cycle conversations appear: comparing multiples (Databricks vs. Snowflake), speculating about a possible "golden age" of IPOs and, in anecdote mode, noting that "today" someone announced a 200M round at a 6B valuation (without going into much detail, but using it as a thermometer of the hype).

Follow the "tertulianos" on Twitter:
• Bernat Farrero: @bernatfarrero
• Jordi Romero: @jordiromero
• César Migueláñez: @heycesr

ABOUT ITNIG
GPUs dominate today's AI landscape, but Google argues they are not necessary for every workload. As AI adoption has grown, customers have increasingly demanded compute options that deliver high performance with lower cost and power consumption. Drawing on its long history of custom silicon, Google introduced Axion CPUs in 2024 to meet needs for massive scale, flexibility, and general-purpose computing alongside AI workloads. The Axion-based C4A instance is generally available, while the newer N4A virtual machines promise up to 2x price performance.

In this episode, Andrei Gueletii, a technical solutions consultant for Google Cloud, joined Gari Singh, a product manager for Google Kubernetes Engine (GKE), and Pranay Bakre, a principal solutions engineer at Arm, for this episode, recorded at KubeCon + CloudNativeCon North America in Atlanta. Built on Arm Neoverse V2 cores, Axion processors emphasize energy efficiency and customization, including flexible machine shapes that let users tailor memory and CPU resources. These features are particularly valuable for platform engineering teams, which must optimize centralized infrastructure for cost, FinOps goals, and price performance as they scale.

Importantly, many AI tasks — such as inference for smaller models or batch-oriented jobs — do not require GPUs. CPUs can be more efficient when GPU memory is underutilized or latency demands are low. By decoupling workloads and choosing the right compute for each task, organizations can significantly reduce AI compute costs.

Learn more from The New Stack about the Axion-based C4A:
Beyond Speed: Why Your Next App Must Be Multi-Architecture
Arm: See a Demo About Migrating an x86-Based App to ARM64

Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
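The "right compute for each task" argument can be made concrete with a back-of-the-envelope selection rule. The sketch below is purely illustrative: the instance names, hourly prices, and latencies are invented assumptions, not real Google Cloud figures. The idea is simply to pick the cheapest machine type that still meets a workload's latency budget.

```python
# Hypothetical machine catalog -- prices and latencies are illustrative
# assumptions, not real Google Cloud quotes.
INSTANCES = {
    "cpu-axion": {"usd_per_hour": 0.50, "latency_ms": 120},
    "gpu-large": {"usd_per_hour": 4.00, "latency_ms": 15},
}

def cheapest_meeting_latency(latency_budget_ms):
    """Return the cheapest instance whose per-request latency fits the
    budget, or None if no instance qualifies."""
    candidates = [
        (spec["usd_per_hour"], name)
        for name, spec in INSTANCES.items()
        if spec["latency_ms"] <= latency_budget_ms
    ]
    return min(candidates)[1] if candidates else None

# A batch job tolerating 500 ms lands on the cheaper CPU instance,
# while an interactive endpoint needing under 50 ms requires the GPU.
print(cheapest_meeting_latency(500))  # cpu-axion
print(cheapest_meeting_latency(50))   # gpu-large
```

Under these made-up numbers, routing the latency-tolerant job to the CPU instance costs an eighth as much per hour, which is the decoupling argument from the episode in miniature.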
In our latest episode, our co-hosts Robby and Tim talk with Jon Morehouse, founder and CEO of infrastructure company Nuon, which enables Bring Your Own Cloud (BYOC) for everyone. This is an exclusive podcast episode with Jon digging into their decision to open source Nuon!

The episode discusses the industry's growing shift toward Bring Your Own Cloud (BYOC), where SaaS products run directly inside a customer's cloud account rather than the vendor's. This model is especially attractive to enterprises because it improves security, data sovereignty, and trust, while enabling earlier pilots and shorter sales cycles. Infrastructure products like Nuon focus on making this practical by packaging applications so they work in customer environments without requiring vendor access, positioning BYOC as an enterprise-first approach that is likely to become the default way software is delivered.

A key theme is open source as a trust and distribution strategy. In the infrastructure space, open sourcing lowers perceived risk, deepens customer collaboration, and builds community, which in turn acts as sales enablement for large enterprise deals. The conversation also connects BYOC to AI, highlighting patterns like bring-your-own-model, keys, and GPUs, and frames BYOC as a spectrum rather than a binary choice. The broader vision is to define and lead a BYOC movement by uniting vendors around shared standards, trust, and community-driven adoption.
This episode is sponsored by AGNTCY. Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support.

Why is AI so powerful in the cloud but still so limited inside everyday devices, and what would it take to run intelligent systems locally without draining battery or sacrificing privacy? In this episode of Eye on AI, host Craig Smith speaks with Steve Brightfield, Chief Marketing Officer at BrainChip, about neuromorphic computing and why brain-inspired architectures may be the key to the future of edge AI. We explore how neuromorphic systems differ from traditional GPU-based AI, why event-driven and spiking neural networks are dramatically more power efficient, and how on-device inference enables faster response times, lower costs, and stronger data privacy. Steve explains why brute-force computation works in data centers but breaks down at the edge, and how edge AI is reshaping wearables, sensors, robotics, hearing aids, and autonomous systems. You will also hear real-world examples of neuromorphic AI in action, from smart glasses and medical monitoring to radar, defense, and space applications. The conversation covers how developers can transition from conventional models to neuromorphic architectures, what role heterogeneous computing plays alongside CPUs and GPUs, and why the next wave of AI adoption will happen quietly inside the devices we use every day.

Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI
In this episode of Hands-On IT, Landon Miles explores the history of servers and enterprise IT infrastructure, from early mainframe computers to cloud computing, Linux servers, virtualization, containers, and AI-driven data centers.

This episode connects decades of server evolution into a clear, accessible story, focusing on the people, technologies, and ideas that shaped modern computing. From IBM's System/360 and minicomputers, to Unix and Linux, virtualization, cloud platforms like AWS and Azure, and container orchestration with Docker and Kubernetes, this episode explains how servers became the foundation of today's digital world.

Topics covered include:
• Server history and early computing systems
• IBM mainframes and enterprise computing
• Minicomputers and distributed computing
• Unix, Linux, and open-source software
• Virtualization and data center efficiency
• Cloud computing and hyperscale infrastructure
• Docker, Kubernetes, and cloud-native architecture
• AI workloads, GPUs, and modern server hardware

Landon also highlights key figures in computing history, including Grace Hopper, Ken Olsen, Linus Torvalds, Dave Cutler, Diane Greene, and Jeff Bezos, and explains how their work still influences IT operations today.

This episode is part of our December Best Of series, featuring some of our favorite moments and episodes from the past year. Originally aired March 20, 2025.
Most companies say they are "doing AI." Very few are actually ready for it.

In my new episode of The Ravit Show, I sat down with Simon Miceli, Managing Director, Cisco, who leads Cloud and AI Infrastructure across Asia Pacific, Japan, and Greater China. He sits right where big AI ambitions meet the hard reality of networks, security, and technical debt. This conversation builds on my earlier episode with Jeetu Patel, President and CPO of Cisco, and goes deeper into what it really takes to get AI working in production in APJC.

Here are a few themes we unpacked:

Only a small group is truly AI ready
- Cisco's latest AI Readiness Index shows that just a small percentage of organizations globally are able to turn AI into real business value. Cisco calls them "Pacesetters."
- They are not just running pilots. They have use cases in production and are seeing returns.

We are entering the agentic phase of AI
- Simon talked about how we are moving from simple chatbots to AI agents that can take action.
- That shift changes everything for infrastructure. Instead of short bursts of activity, you now have systems that are always working in the background, automating processes and touching real operations.

AI infrastructure debt is the new technical debt
- Many organizations are carrying years of compromises in their networks, data centers, and security.
- Simon called this "AI infrastructure debt" and described how it quietly slows down innovation, increases costs, and makes it harder to scale AI safely.

Network as a foundation, not an afterthought
- One of his strongest points: leaders often think first about compute, but the companies that are ahead treat the network as the base layer for AI.
- When workloads double, your network can become the bottleneck before your GPUs do.
- The Pacesetters are already investing to make their networks "AI ready" and integrating AI deeply with both network and cloud.

Three things leaders must fix in the next 2-3 years
Simon shared a very clear checklist for CIOs and business leaders who are serious about agentic AI:
1. Solve for power before it becomes a constraint
2. Treat deployment as day one and keep optimizing models after they go live
3. Build security into the infrastructure from the start so it accelerates innovation instead of blocking it

This was a very honest, no-nonsense view of where APJC really stands on AI, and what the leading organizations are doing differently. Thank you, Simon, for joining me and sharing how Cisco is thinking about AI infrastructure across the region.

#data #ai #cisco #CiscoLiveAPJC #Sponsored #CiscoPartner #TheRavitShow
In this episode of The Dutch Kubernetes Podcast, Ronald and Jan talk with Andrea Giardini, cloud native consultant and trainer, live from Dutch Cloud Native Day. Andrea shares his journey into cloud and Kubernetes and dives deep into a real-world use case where Kubernetes, data engineering, and AI are used to help prevent wildfires.Andrea explains how his client Overstory uses satellite and aerial imagery to monitor vegetation near power lines. By combining geospatial data, machine learning models, and infrastructure data from energy providers, they can calculate risk profiles and alert operators before vegetation causes sparks or fires. Instead of reacting to disasters, the platform focuses on prevention.From a technical perspective, Kubernetes plays a critical role. The workloads vary massively, ranging from small CPU-based tasks to extremely heavy jobs requiring dozens of CPUs, large amounts of memory, or GPUs. Kubernetes provides the flexibility to dynamically scale these workloads, spin resources up and down when needed, and keep costs under control.The conversation also covers the data engineering workflow. JupyterHub is used extensively for data exploration, but Andrea explains why notebooks alone are not reliable for long-term, repeatable processing. Once experiments are validated, workflows are moved into reproducible Python pipelines using a cloud-native workflow orchestrator (Dagster), fully integrated with Kubernetes.They further discuss handling large datasets in object storage, running different pipeline steps with different resource profiles, GPU scheduling, and improving developer experience with pull-request-based preview environments. 
The episode highlights how cloud native technologies are not just about infrastructure efficiency, but can have real-world impact on safety, sustainability, and climate-related challenges.

Send us a message.

ACC ICT - Specialist in IT continuity. Business-critical applications and data securely available, independent of third parties, anytime and anywhere.

Support the show. Like and subscribe! It helps out a lot.

You can also find us on:
De Nederlandse Kubernetes Podcast - YouTube
Nederlandse Kubernetes Podcast (@k8spodcast.nl) | TikTok
De Nederlandse Kubernetes Podcast

Where can you meet us: Events

This podcast is powered by: ACC ICT - IT continuity for business-critical applications | ACC ICT
In this episode of NatChat, the Natilik Multi-Cloud team dives deep into the evolving world of AI and its impact on businesses. Host Nicholas Diesel, Solutions Architect, is joined by Nigel Pyne, Principal Architect for Multi-Cloud, and Jordan Checker, Multi-Cloud & AI Sales Specialist, to unpack the trends that shaped 2025.

The discussion covers:
- AI Standardisation: How blueprints for client-driven AI solutions, including Retrieval-Augmented Generation (RAG) and LoRA fine-tuning, are becoming the norm.
- Infrastructure Decisions: When to invest in data centre, edge, or cloud for AI workloads, and how to balance GPUs, CPUs, and RAM without overspending.
- Use Case & ROI: Why identifying the right AI use cases and proving business value is critical before scaling.
- Governance & Data Sovereignty: The growing importance of compliance, security, and regional data policies in AI deployments.
- Natilik's Role: How Natilik supports clients with flexible platform choices; public cloud, on-prem solutions, and its own AI-ready Natilik platform offering private GPU as-a-Service.

The episode also explores future trends like agentic AI and the data challenges it will bring, plus practical advice for starting small and scaling smart.

Want to learn more or start your AI journey? Reach out to Natilik via natilik.com or connect with Jordan directly at jchecker@natilik.com
In this week's Stansberry Investor Hour, Dan and Corey welcome Luke Lango to the show. Luke is the senior investment analyst at our corporate affiliate InvestorPlace. He has built a reputation for spotting tech stocks on the verge of major market breakouts. Luke kicks things off by sharing his thoughts on what many consider to be the current "AI bubble." He follows that up with how the jobs market is going to transition as AI continues to develop and how the economy will fare during that period. And he provides data for how the AI data-center epicenter has impacted the housing market. (0:00) Next, Luke discusses the shift from companies using graphics processing units ("GPUs") to tensor processing units ("TPUs") for their data centers and why this is taking place. He then gives his thoughts on whether Intel can become a viable competitor again in this market. And he highlights the risks around the AI companies being interconnected and feeding into each other. (18:53) Finally, Luke expresses why he's pleased that Alphabet has begun to act as a competitor to Nvidia with its own TPUs. He also covers AI being used in ads and how companies like Meta Platforms have seen success with utilizing it in that area. The three all share how they're all using AI in their personal use cases. And Luke gives his thoughts on what the big investment themes are going to be for 2026. (39:01)
- Nvidia H200 exports to China
- H20, H200, Chinese chips: how do they stack up?
- Few fast GPUs vs many slow GPUs
- China's electricity production
- Datacenter electricity use in the US
- Cell-phone sized AI supercomputer
- HPC at the edge
- Regulating AI

The post HPC News Bytes – 20251215 appeared first on OrionX.net.
This Week In Startups is made possible by:
LinkedIn Ads - http://linkedin.com/thisweekinstartups
DevStats - https://www.devstats.com/twist
Crusoe - https://crusoe.ai/build

Today's show: FINALLY, you can hang out with Kylo Ren and Olaf the Snowman… thanks to the magic of AI. On TWiST, we're digging into the mega OpenAI-Disney deal. Mickey is giving Sam Altman a $1 billion investment AND will allow its copyrighted characters to appear in Sora and ChatGPT images. Of course, Jason predicted this would happen WAY BACK during the summer months and even showed off his "Darth Calacanis" creation on the "All-In Podcast."

PLUS: Amazon has been launching and pulling AI features from Prime Video… what gives? Jason's predictions on the coming AI blowback and who's on what side. Why he's so focused on Education, Health Care, and Housing as issues. AND why founders should always take calls from Big Companies, even if it might just be a fishing expedition. It's a new Friday TWiST! Check it out!

Timestamps:
(00:00) Lon joins Alex and Jason to talk about the big Disney-OpenAI deal bringing Disney characters to Sora
(03:10) Jason totally called the Disney-OpenAI stuff on All-In
(9:42) LinkedIn Ads: Start converting your B2B audience into high quality leads today. Launch your first campaign and get $250 FREE when you spend at least $250. Go to http://linkedin.com/thisweekinstartups to claim your credit.
(18:59) DevStats: DevStats integrates your dev work and your business goals into a shared language that everyone can understand. Get 20% off, plus access to their dedicated Slack channel. Just go to https://www.devstats.com/twist.
(20:15) Why Amazon Prime Video pulled its AI recaps and anime dubs
(24:44) Who gets to set the rules around AI: The Debate Continues
(26:13) Jason's predictions on the AI blowback coming in 2026… with clips!
(30:11) Crusoe Cloud: Crusoe is the AI factory company. Reliable infrastructure and expert support. Visit https://crusoe.ai/build to reserve your capacity for the latest GPUs today.
(31:21) Is AI here to help people or replace them?
(35:55) It's all about EHH: Education, Health Care, Housing
(40:47) How all of this and MORE will be impacted directly by AI automation
(45:35) Why Alex wants to lower the temperature around AI Doomerism
(51:19) JUST FOR FOUNDERS: When should you take a call from a BigCo?
(53:45) Why Jason thinks just about everyone in media will lose to TikTok and YouTube

Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com/
Check out the TWIST500: https://twist500.com
Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp

Follow Lon:
X: https://x.com/lons

Follow Alex:
X: https://x.com/alex
LinkedIn: https://www.linkedin.com/in/alexwilhelm/

Follow Jason:
X: https://twitter.com/Jason
LinkedIn: https://www.linkedin.com/in/jasoncalacanis/

Thank you to our partners:
(9:42) LinkedIn Ads: Start converting your B2B audience into high quality leads today. Launch your first campaign and get $250 FREE when you spend at least $250. Go to http://linkedin.com/thisweekinstartups to claim your credit.
(18:59) DevStats: DevStats integrates your dev work and your business goals into a shared language that everyone can understand. Get 20% off, plus access to their dedicated Slack channel. Just go to https://www.devstats.com/twist.
(30:11) Crusoe Cloud: Crusoe is the AI factory company. Reliable infrastructure and expert support. Visit https://crusoe.ai/build to reserve your capacity for the latest GPUs today.
This week in bitcoin mining news, ERCOT sees 266 GW of interconnection requests in 2026, IREN closed a $2.3 billion convertible note offering, and GPUs are leaving ASICs in the dust. Subscribe to the Blockspace newsletter for market-making news as it hits the wire! Welcome back to The Mining Pod! Today, Ethan Vera, COO of Luxor, joins us as we dive into MicroBT's Whatsminer M70 launching into a challenging ASIC market, IREN's $2.3 billion convertible note offering, the precarious state of hashprice, Luxor's new GPU hardware sales business, the staggering 270% leap in ERCOT interconnection requests, and the controversial Cat bitcoin fork proposal aimed at filtering ordinals / inscriptions. Subscribe to the newsletter! https://newsletter.blockspacemedia.com
**Notes:**
- Hashprice is below $40 per PH/day
- Three negative difficulty adjustments
- ERCOT requests leaped 270% in 2025
- 73% of requests from data centers
- IREN raised $2.3B in convertible notes
- M70 efficiency: 12.5 J/TH
00:00 Start 02:35 Difficulty Report by Luxor 07:26 IREN note 10:44 M70 launch 20:02 Luxor launches GPU trading 27:12 ERCOT LL requests up 270% in 2025 34:10 Cry Corner: another filter fork proposal
Episode 92: Edward Crisler from Radeon-exclusive AIB Sapphire joins the podcast to chat about the current GPU market. How will rising DRAM prices affect gaming GPUs? Can the GPU makers and AIBs absorb some of the increased cost? We also talk about RDNA 4 and how successful it's been compared to previous generations, AMD's true market share, and of course, the Sapphire Puke box art.
CHAPTERS
00:00 - Intro
01:03 - RDNA 4 Launch at Sapphire
05:11 - RDNA 4 vs Older Generations Success
11:32 - The DRAM Crisis
20:25 - AIBs Want More Control
24:48 - Thoughts on 12VHPWR
26:32 - How Are SKU Decisions Made?
32:35 - Sapphire Puke
35:27 - DRAM Pricing: What Can AMD and AIBs Do?
44:50 - AI-Focused GPU Makers Owe Everything to Gamers
50:56 - AMD's True Market Share
59:05 - The Key to RDNA 4's Success
1:03:13 - Outro with Ed's Favorite Sapphire Generation
SUBSCRIBE TO THE PODCAST
Audio: https://shows.acast.com/the-hardware-unboxed-podcast
Video: https://www.youtube.com/channel/UCqT8Vb3jweH6_tj2SarErfw
SUPPORT US DIRECTLY
Patreon: https://www.patreon.com/hardwareunboxed
LINKS
YouTube: https://www.youtube.com/@Hardwareunboxed/
Twitter: https://twitter.com/HardwareUnboxed
Bluesky: https://bsky.app/profile/hardwareunboxed.bsky.social
Hosted on Acast. See acast.com/privacy for more information.
AI isn't running out of GPUs… it's running out of power. Today on Dumb Money, the overlooked AI energy supplier that could be one of the most misunderstood stocks in the market.
AI demand for GPUs is exploding – and most of that capacity is locked inside underused data centers. In this episode, I talk with Mark from Aethir, a decentralized GPU cloud that aggregates idle, enterprise-grade GPUs into a global network. We discuss how Aethir feels like AWS on the front end but works like “Airbnb for data centers” behind the scenes, why compute demand outpaces supply, and how they keep latency low across 90+ countries. Mark also explains Aethir's token and revenue model, their work with EigenLayer, and why he believes solo founders now have superpowers in an AI-native world. Nothing in this episode is financial or investment advice.
Key timestamps
[00:00:00] Intro: Sam introduces Mark and Aethir's decentralized GPU cloud.
[00:01:00] Mark's journey: From oil and gas infra and biotech to building GPU infrastructure for AI.
[00:04:00] What Aethir is: AWS-style GPU cloud on the front end, “Airbnb for data centers” on the back end.
[00:06:00] Enterprise-only GPUs: Why they only use data-center-grade hardware and no consumer devices.
[00:07:00] Exploding demand: GPU demand 6–8x supply, with inference-heavy apps driving the next wave.
[00:14:00] Global coverage: 90+ countries and routing users to nearby nodes for low latency.
[00:31:00] Business model: 20% protocol fee, 80% to GPU hosts, plus token rewards and staking for large clusters.
[00:39:00] Solo founder era: Why one-person AI-native companies will be extremely powerful.
[00:41:00] Mark's message: Focus on projects with strong fundamentals and keep building through cycles.
Connect
http://aethir.com/
https://www.linkedin.com/company/aethir-limited/
https://x.com/AethirCloud
https://www.linkedin.com/in/markrydon/
https://x.com/MRRydon
Disclaimer
Nothing mentioned in this podcast is investment advice and please do your own research.
It would mean a lot if you can leave a review of this podcast on Apple Podcasts or Spotify and share this podcast with a friend.Get featuredBe a guest on the podcast or contact us – https://www.web3pod.xyz/
In the race to train and deploy generative AI models, companies have poured hundreds of billions of dollars into GPUs, chips that have become essential for the parallel processing needs of large language models. Nvidia alone has forecast $500 billion in sales across 2025 and 2026. Jensen Huang, founder and CEO of Nvidia, recently stated that “inference has become the most compute-intensive phase of AI — demanding real-time reasoning at planetary scale”. Google is meeting these demands in its own way. Unlike other firms reliant on chips by Nvidia, AMD, and others, Google has long used its in-house ‘tensor processing units' (TPUs) for AI training and inference. What are the benefits and drawbacks of Google's reliance on TPUs? And how do its chips stack up against the competition? In this episode, Jane and Rory discuss TPUs – Google's specialized processors for AI and ML – and how they could help the hyperscaler outcompete its rivals. Read more:
Nvidia Distinguished Engineer Kevin Klues noted that low-level systems work is invisible when done well and highly visible when it fails — a dynamic that frames current Kubernetes innovations for AI. At KubeCon + CloudNativeCon North America 2025, Klues and AWS product manager Jesse Butler discussed two emerging capabilities: dynamic resource allocation (DRA) and a new workload abstraction designed for sophisticated AI scheduling. DRA, now generally available in Kubernetes 1.34, fixes long-standing limitations in GPU requests. Instead of simply asking for a number of GPUs, users can specify types and configurations. Modeled after persistent volumes, DRA allows any specialized hardware to be exposed through standardized interfaces, enabling vendors to deliver custom device drivers cleanly. Butler called it one of the most elegant designs in Kubernetes. Yet complex AI workloads require more coordination. A forthcoming workload abstraction, debuting in Kubernetes 1.35, will let users define pod groups with strict scheduling and topology rules — ensuring multi-node jobs start fully or not at all. Klues emphasized that this abstraction will shape Kubernetes' AI trajectory for the next decade and encouraged community involvement.
Learn more from The New Stack about dynamic resource allocation:
Kubernetes Primer: Dynamic Resource Allocation (DRA) for GPU Workloads
Kubernetes v1.34 Introduces Benefits but Also New Blind Spots
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
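As a concrete illustration of the DRA request model described above (specifying device types and constraints rather than a bare GPU count), here is a minimal sketch. The `gpu.example.com` device class, the `model` attribute, and the container image are hypothetical, and field names can vary between DRA API versions; this assumes the `resource.k8s.io/v1` shape that went GA in Kubernetes 1.34.

```yaml
# Hypothetical sketch: a ResourceClaimTemplate that asks for exactly one device
# from an assumed "gpu.example.com" device class, narrowed by a CEL selector,
# plus a Pod that consumes a claim stamped out from that template.
apiVersion: resource.k8s.io/v1
kind: ResourceClaimTemplate
metadata:
  name: single-a100
spec:
  spec:
    devices:
      requests:
      - name: gpu
        exactly:
          deviceClassName: gpu.example.com   # assumed vendor device class
          allocationMode: ExactCount
          count: 1
          selectors:
          - cel:
              # "model" is a hypothetical attribute published by the driver
              expression: device.attributes["gpu.example.com"].model == "a100"
---
apiVersion: v1
kind: Pod
metadata:
  name: trainer
spec:
  resourceClaims:
  - name: gpu
    resourceClaimTemplateName: single-a100
  containers:
  - name: main
    image: registry.example.com/trainer:latest   # hypothetical image
    resources:
      claims:
      - name: gpu   # binds this container to the allocated device
```

Compared with a bare `nvidia.com/gpu: 1` resource limit, the claim carries structured intent, so the scheduler and the vendor's driver can match a specific device rather than an opaque count — the "types and configurations" point made above.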
Artificial intelligence, in its many senses, has dominated the public agenda and even the allocation of capital at the big technology companies. But have you ever stopped to think about the gigantic infrastructure required to sustain AI's accelerated growth? The present and future of artificial intelligence depend on the existence of data centers, and discussing this subject is more urgent now than ever. We are watching a movement take shape that looks like yet another form of digital colonialism: with growing resistance to the construction of data centers in the countries of the Global North, companies and governments seem determined to bring these enormous infrastructures, with all their negative impacts, to the Global South. In this episode, Yama Chiodi and Damny Laya talk with researchers, activists, and affected communities to deepen the debate on the material infrastructure of AI. We discuss what data centers are and how they impact, and will impact, our lives. In the second episode, we revisit the movements resisting their installation in Brazil and how our country fits into this debate, from the perspective of activists and researchers who are pushing for fairer regulation of these large projects. ______________________________________________________________________________________________ SCRIPT [ series jingle ] [ Bio Unit starts ] YAMA: Artificial intelligence, in its many senses, has dominated the public agenda and even the allocation of capital at the big technology companies. But have you ever stopped to think about the gigantic infrastructure required to sustain AI's accelerated growth? DAMNY: The present and future of artificial intelligence depend on the existence of data centers. And discussing this subject is more urgent now than ever.
We are watching a movement take shape that looks like yet another form of digital colonialism: with growing resistance to the construction of data centers in the countries of the Global North, companies and governments seem determined to bring data centers, with all their negative impacts, to the Global South. YAMA: We spoke with researchers, activists, and affected communities, and over two episodes we will try to deepen the debate on the material infrastructure of AI. In the first, we discuss what data centers are and how they impact, and will impact, our lives. DAMNY: In the second, we revisit the movements resisting their installation in Brazil and how our country fits into this debate, from the perspective of activists and researchers who are pushing for fairer regulation of these large projects. [ low tone ] YAMA: I'm Yama Chiodi, a science journalist and researcher in the field of climate change. If you already listen to Oxigênio, you may have heard me in the Cidade de Ferro series or in the episode on the Anthropocene. Over the past few months I investigated the environmental impacts of artificial intelligence for a joint project between LABMEM, the laboratory for technological change, energy, and the environment, and Oxigênio. Last September, Damny joined me to build these episodes together. And not by chance: last October, Damny published a report on the socio-environmental impacts of data centers in Brazil, titled "Não somos quintal de data center" ("We are not a data center's backyard"). A link to the full report is in the episode description. Welcome to Oxigênio, Dam. DAMNY: Hi, Yama. Thanks for the invitation to build these episodes together. YAMA: It's a pleasure, my friend. DAMNY: I also work as a science journalist, and I've been a researcher of internet governance for quite some time.
I'm now working as a journalist and researcher here at LABJOR, but when I wrote the report I was a researcher-consultant at the NGO IDEC, the Brazilian consumer defense institute. YAMA: We'll get started after the jingle. [ Bio Unit ends ] [ Oxigênio jingle ] [ Documentary starts ] YAMA: In media coverage of data centers, you have probably come across the framing that tells you how many liters of water each question to ChatGPT consumes. But we don't much like that approach, among other reasons because it reduces the problem of AI's socio-environmental impacts to a question of individual consumption. And that is both a political and a factual mistake. Calculating how much water each question to ChatGPT uses lifts responsibility from the companies and transfers it to users, hiding the true scale of the problem. Even if individual consumption grows at an explosive pace, it will always be a small fraction of the problem. Data centers operate at industrial scale, crunching staggering amounts of data to train models and run other corporate services. A single facility can consume more energy in a day than the cities hosting it consume over a month. DAMNY: We have grown used to imagining artificial intelligence as an ethereal "cloud," but in reality it only exists thanks to monstrous data centers that consume absurd quantities of natural resources. The social and environmental impacts are severe. Data centers are machines for consuming energy, water, and land, and they create air and noise pollution, in a model that reinforces old patterns of environmental racism. The development of these infrastructures often happens at the margins of the affected communities, replaying the global playbook of environmental injustice.
If we follow their networks, we see their impacts on rivers, soil, air, and Indigenous territories, and in the growing demand for critical minerals and, consequently, for deeply destructive mining practices. YAMA: According to researcher Tamara Kneese, director of the Climate, Technology, and Justice program at the research institute Data & Society, whom we spoke with, this infrastructure is creating a new form of technological colonialism. The environmental damage is frequently directed at the most vulnerable communities, from rural areas to the peripheries of major urban centers, which become sacrifice zones for the progress of this industry. DAMNY: On top of that, the growing dissatisfaction of Global North communities with data centers has produced the colonial effect of outsourcing these structures to the Global South. And Brazil is not only no exception, it seems to be a preferred destination because of its large supply of clean energy. [pause] And with the blessing of the federal government, which has just published a provisional measure called REDATA, whose goal is to attract data centers to Brazil with tax exemptions and very few obligations. [ Documentary ends ] [ low tone ] VOICE OVER: BLOCK 1 – WHAT ARE DATA CENTERS? YAMA: To understand what data centers are, we first need to understand that artificial intelligence is not merely an ethereal cloud that exists only virtually. That is how we began our conversation with the American researcher Tamara Kneese. She is the director of the Climate, Technology, and Justice program at the research institute Data & Society. TAMARA: So I think, you know, the problem with our relationship to computing is that, most of the time, we don’t really think that much about the materiality of the computing system and the larger supply chain. You know, thinking about the fact that, of course, everything we do relies not just on our own device, or the particular cloud services that we subscribe to, but also on a much larger supply chain. So, where does the hardware come from, that we are using, and what kind of labor practices are going into that? And then, you know, further back in the supply chain, thinking about raw materials and critical minerals and other forms of extraction, and human rights abuses and labor abuses that also go into the production of the raw materials that we need for computing in general. DAMNY: Tamara has written extensively about how the metaphor of the cloud deceives us, because it makes it hard to see the full chain involved in processing so much data. And that became a much bigger question with the creation of chatbots and generative AI. YAMA: If the pandemic already marked a turning point in the growing need for data processing, when we started going to school and work through our computers, the boom in generative AI created an unprecedented need to expand these chains. DAMNY: And at the far end of every cloud's infrastructure are the data centers.
More than generating enormous socio-environmental impacts, they are the clearest way to see that the current pace of AI expansion cannot continue for long, because of physical limits. There is not enough land or natural resources to sustain it. YAMA: We spoke with Cynthia Picolo, Executive Director of LAPIN, the Laboratório de Políticas Públicas e Internet. LAPIN has worked hard against rights violations in the rollout of data centers in Brazil, and we'll talk more about that later. DAMNY: One of the things Cynthia helped us understand is that we cannot dissociate AI from data centers. CYNTHIA: There is a materiality behind it. There is a physical infrastructure, which is the data centers. Data centers are these huge structures capable of storing, processing, and transferring the data, the processing that allows artificial intelligence to happen, to develop. One does not exist without the other. To talk about AI is to talk about data centers. There is no way to separate them. YAMA: But what does a data center look like? Tamara describes what we can see in photos and videos online. TAMARA: Yeah, so, you know, essentially, they’re like giant warehouses of chips, of servers, of networked systems, and, you know, they look like basically nondescript square buildings, very similar. And you wouldn’t really know that it’s a data center unless you look at the lighting, and you kind of realize that something… like, it’s not inhabited by people or workers, really.
DAMNY: In the next block, we try to summarize the main socio-environmental problems data centers already cause, and will cause with far greater intensity in the future. [ low tone ] VOICE OVER: BLOCK 2 – THE ENORMOUS LIST OF PROBLEMS YAMA: Energy consumption is probably the best-known problem of data centers and AI. According to the International Energy Agency (IEA), an international organization Brazil is part of, data centers are estimated to have consumed around 415 TWh in 2024. For comparison, according to the Empresa de Pesquisa Energética, a public research institute tied to the Ministry of Mines and Energy, Brazil consumed around 600 TWh in 2024. DAMNY: According to the same IEA report, electricity consumption by data centers in 2030 is projected to reach at least 945 TWh, which would represent 3% of all projected global consumption. When we look at estimates from other sources, however, these projections are arguably conservative, especially considering the impact of the popularization of the so-called LLMs, or large language models. YAMA: In other words, even under conservative projections, the world's data centers would consume in 2030, less than five years from now, about 50% more energy than all of Brazil consumes today. According to the IEA, by 2030 global electricity consumption by data centers should be equivalent to the consumption of India, the most populous country in the world. And some local situations are even more precarious. DAMNY: Ireland is one of them. According to a New York Times report published last October, electricity consumption by data centers there is expected to represent at least 30% of the country's total consumption in the coming years. But why do data centers consume so much energy?
TAMARA: So, you know, particularly with the kinds of AI that companies are investing in right now, there’s a need for more powerful chips, GPUs, and so data centers are also about providing enough energy and computational power for these powerful language models to be trained and then used. And so the data center also, you know, in part because it does require so much energy, and it’s just this incredibly energy-intensive thing, you also need water. And the water comes from having to cool the servers, and so… So there are a number of different cooling systems that use water. And then on top of that, you also need backup energy sources, so sometimes, because there’s such a draw on the power grid, you have to have backup generators to make sure that the data center can keep going if something happens with the grid. YAMA: And here we start to grasp the size of the problem. Data centers are often built in places that already suffer from precarious electricity infrastructure and a lack of drinking water.
So they create scarcity where there was none and deepen it in places where it was already a serious issue – like the metropolitan region of Fortaleza, which we will discuss in the next episode and which is about to receive an enormous TikTok data center. DAMNY: That is also what residents of Querétaro, Mexico, who live near Microsoft's data centers, report. The operation of Microsoft's data centers produced an unprecedented crisis, with frequent power outages and interruptions to the water supply that often last for weeks. The data centers affected the communities so badly that schools canceled classes and, indirectly, were responsible for a gastroenteritis outbreak among children. YAMA: And that brings us to the second point: the consumption of water, critical minerals, and other natural resources. TAMARA: I think the energy problem has probably gotten the most attention, just because it is a source of anxiety, too, so thinking about, you know, energy demand at a time when we’re supposed to be transitioning away from fossil fuels. And clearly, the effects that that can have will be devastating. But I think on a local level, things like the water consumption can matter more. So, you know, if we have tech companies moving into rural areas in Mexico and, you know, using up all of their water and basically preventing people in the town from having access to water. That is incredibly problematic. So I think, you know, in water-stressed areas and areas where the people living in a place don’t have as much negotiating power with the company. Don’t have as much political power, and especially if places are basically already treated as sacrifice zones, which we’ve seen repeatedly many places in the world, with Indigenous land in particular, you know, I think the consequences may go far beyond just thinking about, you know, the immediate kind of energy-related problems. YAMA: There are at least four uses that make data centers water-consuming machines. The most direct and local is the water used to refrigerate all the equipment that heats up during computation, the process known as cooling. This practice frequently uses drinking water. Although already extremely significant in terms of consumption, this is only one of the ways data centers consume water in abundance. DAMNY: Indirectly, data centers also consume the water tied to their high energy use, especially in electricity generation at hydroelectric and thermoelectric plants. Also linked to energy consumption is the water used at treatment plants, which treat the wastewater a data center generates in an attempt to reduce the amount of clean water used.
YAMA: Finally, the supply chain of the chips and servers that make up data centers requires ultrapure water and generates chemical waste. Even though we know this factor produces significant water use and carbon emissions, the data are extremely opaque, among other reasons because most of the data we have on water consumption in data centers is provided by the companies themselves. CYNTHIA: Water and minerals are also foundational components of data center structures, which are foundational to the functioning of artificial intelligence. (…) And there is this whole issue, as I've said many times: they capture a gigantic volume of fresh water, and the water returned to the ecosystem often does not make up for the water that was captured. Yet the companies also make promises; in some reports you will see they even promise to eventually return one hundred and twenty percent of the water, committing to give back more water than they captured. But the reality is something else. Google, for example, reported a one hundred and seventy-seven percent increase in water use over the last five years. Microsoft, thirty-eight percent more. And Amazon doesn't even report its water consumption volume, a tremendous gap for a company of that size, considering the whole data center sector. So there is this whole water question, which is very worrying, not only because of the capture, the treatment, and how the water returns to the environment, but because there is also a dispute with territories whose subsistence depends on very specific natural resources, a dispute over that resource between community and enterprise. DAMNY: In Cynthia's remarks we see two important things. The first is that there is no data center without water for cooling, so the local impact of installing one of these projects is an irrefutable certainty. And it is a continuous harm.
As long as it is operating, it will need water. It is as if a large city suddenly arrived, demanding an amount of water and energy the place simply does not have to offer. And when it comes time to choose between people and multimillion-dollar enterprises, guess who ends up without water and with more expensive energy? YAMA: The second important point Cynthia makes is when she calls our attention to the demand for natural resources. We know natural resources are scarce. More than that, natural resources obtained through mining carry their own forms of social and environmental impact, as we frequently see in the Brazilian Amazon. What will happen to data centers when local natural resources are no longer sufficient for their optimal operation? Given computing that is constantly renewed by the speed of obsolescence, what happens to the large volume of electronic waste data centers generate? Questions without answers. DAMNY: The geopolitical crisis around the minerals known as rare earths shows the political and environmental complexity of AI's future from the standpoint of materials and supply chains. In the study done by LAPIN, Cynthia told us she considers the AI-driven increase in demand for critical minerals one of the most opaque points in big tech companies' communications about the impact of their data centers. CYNTHIA: And another point with major gaps: in our mapping of these natural-resource topics, the mineral extraction chain was the most opaque, because the companies basically report nothing about mineral extraction. And it is very critical, because we know many minerals also come from conflict zones. The big companies, at least the three we mapped, have just a short passage accounting for their mineral supply chain.
All they do is say that they follow a specific OECD framework on accountability. YAMA: When companies talk about using clean energy and recycling the water they use, they are shirking responsibility for their data centers. Clean energy does not mean an absence of environmental impact. For the big companies, clean energy sources serve to generate surplus, not to actually replace fossil fuels. You may have a data center running mostly on solar power in the future, but that does not change the fact that it needs to run 24/7, and the batteries and diesel generators will always be there. Moreover, water recycling plants, solar farms, and wind farms also have significant socio-environmental impacts. The use of green resources complicates the problem of identifying data centers' local impacts and responsibilities, but it in no way solves the infrastructure and water- and energy-supply problems these ventures cause. DAMNY: That is why we warn against buying too easily the story that each question to ChatGPT consumes x liters of water. Whether you ask ChatGPT nothing today or ask it a thousand questions, it will change absolutely nothing about the high water consumption and destructive local impacts of the data centers being installed at full speed across Latin America. The amount of data and computation a big tech company uses to train its models, for example, can never be equated with the individual consumption of chatbots. It is like comparing the campaigns asking you to turn off the tap while brushing your teeth with agribusiness, which spends in minutes more water than you will use in your entire life. In short, companies like Google, Microsoft, Meta, and Amazon only take responsibility for the impacts directly caused by their data centers, and even then it is responsibility in heavy quotation marks, built on greenwashing.
Have you ever heard of greenwashing? CYNTHIA: This English expression means just what its literal translation says: green talk. (…) It is exactly what we have been discussing. It is when a company pretends to care about the environment in order to look sustainable, but in practice its actions do not deliver those real benefits and, on the contrary, sometimes even harm the environment. So it is actually a way of manipulating, or even deceiving, people, the users of those systems or services, with speeches and campaigns bearing these green seals, without proving anything in practice. YAMA: In this context, it becomes essential that we grow more aware of the entire material infrastructure behind artificial intelligence. As Tamara summed it up well for us: TAMARA: But that… I think that is why having a sense of the entire AI supply chain is really helpful, just in terms of thinking about, you know, even if you're, in theory, using renewable energy to build a data center, you still are relying on a lot of other materials, including chips, including minerals, and other things. (…) We're still, you know, possibly going to be harming communities and causing environmental disruption. [low tone] YAMA: Before we move on to the last segment, I just wanted to say that the full interview with Dr. Tamara Kneese was much longer and was published in its entirety on the GEICT blog.
The link to the interview is in the episode description, but if you prefer, you can go straight to the GEICT blog. [low tone] VOICE OVER: SEGMENT 3 – GLOBAL PROBLEMS, LOCAL PROBLEMS YAMA: Even when we know the supply chains, greenwashing strategies bring a major problem to the surface: a kind of outsourcing of responsibility. Companies put forward compensatory measures that do nothing to reduce the local impact of their data centers. So there is a class of impacts that are global, such as carbon emissions and the rising demand for critical minerals, global in the sense that they make up a relevant share of data centers' impacts but do not land exactly in the places where the facilities were built. CYNTHIA: Google, for example, in the five-year window of our research, simply reported a seventy-three percent increase in carbon emissions. That is no small thing. Microsoft's increase came in scope two, the indirect emissions, largely because of data centers, since there is a differentiation by scope when we talk about greenhouse gas emissions; over that five-year period, Microsoft quadrupled its emissions. Amazon's grew by more than thirty percent. So practice is showing that these promises are very far from being met. But then a more narrative context comes in. Why have they been talking about and promising carbon neutrality? Because there is a compensation mechanism. (…) They say they are racing to hit this carbon-neutrality target, but largely by way of compensation instruments, whether carbon credits or renewable-energy arrangements. So they buy these certificates, they sign these contracts, but in fact there is no reduction in emissions. There is compensation. (…) And this compensation is, at the end of the day, a financial mechanism.
Because when you, as a company, address the compensation of your environmental impacts through contractual instruments, you are ignoring the local impact. So if I am emitting and causing impact here in Brazil, and I am buying carbon credits from projects in another area, the local impact of my venture is being ignored. YAMA: And the local material impacts remain extremely relevant. Beyond the impact on local energy and water infrastructure that we have already discussed, there are many complaints about the air pollution generated by the generators, the lights that never go off, and even noise pollution. Tamara told us about a curious case of an outbreak of sleep disorders and migraines that swept through data center regions in the United States. TAMARA: And the other thing is, you know, for people who live near them, they're very loud, and so if you talk to people who live near data centers, they will talk about the light pollution, the noise pollution. And it's been interesting, too, to hear from communities that are near crypto mining facilities, because they will complain of things like migraine headaches and sleep deprivation from living near the facilities. And, you know, the other thing is that the air pollution is quite noticeable.
So there's a lot of particulate matter, particularly in the case of using diesel-fueled backup generators as an energy stopgap. DAMNY: And from the standpoint of local impacts, there is a crucial factor that cannot be forgotten: territory. Data centers may be gigantic, but they occupy far more space than merely their buildings, because their supply chains demand it. How will water and energy reach the buildings? Even if renewable energy sources are used, where will the solar farms, the wind farms, and the water treatment plants be installed? Where will the contaminated and/or treated water be discharged? Who will oversee all this? YAMA: And this endless demand for territory runs straight into questions of environmental racism, because the territories sacrificed so these ventures can operate are, very often, the places where Indigenous peoples and marginalized populations live. Here we see that local resistance against the installation of data centers is, before anything else, a matter of environmental justice. Take South Memphis in the United States, for example. TAMARA: I think, the way of framing particular kinds of harm, so, you know, it's not just about, you know, people's energy bills going up, or thinking about how we quantify the energy use or the water use of particular data centers, but really thinking about the relationship between a lot of those social harms and algorithmic harms and the environmental racism and other forms of embodied harms that communities are dealing with on that hyper-local level. And, you know, in this country, with its history of white supremacy and just general lack of civil rights, you know, a lot of the places where Black communities have traditionally been tend to be, you know, the ones sacrificed for various types of development, like, you know, putting up interstates, putting up warehouses for Amazon, and data centers are just a continuation of what was already happening. And then you have a lot of crooked deals on the local level, where, you know, maybe a mayor and other local officials think that they're getting something economically of value. In South Memphis, the data center is connected to xAI. And so thinking about this platform that is so racist and so incredibly harmful to Black communities, you know, anyway, and then has the audacity to actually pollute their land even more.
DAMNY: With the question of environmental racism, we head toward our second episode, where we will try to understand how Brazil fits into the data center question and how different sectors of the population are organizing to resist. Before closing this episode, however, we briefly bring into the conversation two people who will be central to the next one. YAMA: They help us understand how we need to take territory into account when assessing impacts. One of them is Andrea Camurça, of Instituto Terramar, who is fighting alongside the Anacé people for their right to be consulted about the construction of a TikTok data center on their territories. Here is a short excerpt of her speaking about how even supposedly renewable measures become territorial violations in a context of environmental racism. ANDREA: We received news recently, just yesterday in fact, that a mega solar venture is planned, over toward the Jaguaribe region, that will occupy on average the equivalent of six hundred soccer fields. What that represents is the loss of land. It is the loss of water. It is the loss of territory. It is a whole range of harms to traditional peoples and communities that are not recognized, that are rendered invisible. The area is sold as territory without people, and that is how these energies arrive. So yes, we need to discuss renewable energy. We need to discuss energy sovereignty. We need to discuss digital sovereignty, but built from local needs and from the sovereignty of these populations. DAMNY: The other person I mentioned is an Indigenous leader, cacique Roberto Anacé.
Making a sharp connection that helps us see how the global and local impacts of data centers are linked, he observes that we seem to be entering a new moment of colonialism, in which Brazil's digital and environmental sovereignty is again at risk, colliding with the violation of Indigenous lands. CACIQUE ROBERTO: There is a risk to biodiversity, to nature itself, from the withdrawal of water and the increase in energy use, but not only for the territory of the Serra; it is a risk for everyone who uses the data, or whoever exposes that data. Nobody knows whose hands it will end up in, who will control it, who will give the orders. And what do they want this colonization for? That is what I call it, this colonization of data. I believe the invasion of Brazil in fifteen hundred took one form. Now we have the invasion of our lives, not only for Indigenous people but for everyone, people who often speak very well but do not know what will happen after that data is stored, how that data will be used, what it will be used for. So these harms go beyond the Indigenous territory of the Serra. [low tone] [Bio Unit begins] YAMA: The research, interviews, and presentation of this episode were done by Damny Laya and by me, Yama Chiodi. I also wrote the script and produced the episode. The voiceover translation of Tamara's remarks was read by Mayra Trinca. Oxigênio is a podcast produced by students of the Laboratório de Estudos Avançados em Jornalismo at Unicamp and outside collaborators, in partnership with Unicamp's Executive Secretariat of Communication and with support from Unicamp's Student Aid Service. We also have the support of FAPESP, which funds grants like the one supporting this science communication project. DAMNY: You will find the full list of credits for the sounds and music used in the episode description. All episodes are available at oxigenio.comciencia.br and on your favorite platform.
On Instagram and Facebook you can find us as Oxigênio Podcast. Follow us so you don't miss an episode, and take the chance to leave a comment! [Bio Unit ends] [Oxigênio jingle] Credits: "Aerial" was composed by Bio Unit; "Documentary" by Coma-Media, both under Creative Commons licenses. The cork sounds and bass loops are from the GarageBand loop library. Script, production: Yama Chiodi. Research: Yama Chiodi, Damny Laya. Narration: Yama Chiodi, Damny Laya, Mayra Trinca. Interviewees: Tamara Kneese, Cynthia Picolo, Andrea Camurça, and cacique Roberto Anacé. __________ Descendo a toca do coelho da IA: Data Centers e os Impactos Materiais da "Nuvem" – an interview with Tamara Kneese: https://www.blogs.unicamp.br/geict/2025/11/06/descendo-a-toca-do-coelho-da-ia-data-centers-e-os-impactos-materiais-da-nuvem-uma-entrevista-com-tamara-kneese/ Não somos quintal de data centers – a study on the socio-environmental and climate impacts of data centers in Latin America: https://idec.org.br/publicacao/nao-somos-quintal-de-data-centers Other references and sources consulted: Technical reports and official data: IEA (2025), Energy and AI, IEA, Paris. https://www.iea.org/reports/energy-and-ai, Licence: CC BY 4.0 "Inteligência Artificial e Data Centers: A Expansão Corporativa em Tensão com a Justiça Socioambiental". LAPIN. https://lapin.org.br/2025/08/11/confira-o-relatorio-inteligencia-artificial-e-data-centers-a-expansao-corporativa-em-tensao-com-a-justica-socioambiental/ Market study on data center power & cooling. DCD – Data Center Dynamics. https://media.datacenterdynamics.com/media/documents/Report_Power__Cooling_2025_PT.pdf Pílulas – Impactos ambientais da Inteligência Artificial. IP.rec. https://ip.rec.br/publicacoes/pilulas-impactos-ambientais-da-inteligencia-artificial/ Policy Brief: IA, data centers e os impactos ambientais.
IP.rec. https://ip.rec.br/wp-content/uploads/2025/05/Policy-Paper-IA-e-Data-Centers.pdf MEDIDA PROVISÓRIA Nº 1.318, DE 17 DE SETEMBRO DE 2025. https://www.in.gov.br/en/web/dou/-/medida-provisoria-n-1.318-de-17-de-setembro-de-2025-656851861 Infographic on key minerals used in data centers, from the U.S. Geological Survey (USGS). https://www.usgs.gov/media/images/key-minerals-data-centers-infographic News and reporting: From Mexico to Ireland, Fury Mounts Over a Global A.I. Frenzy. Paul Mozur, Adam Satariano, and Emiliano Rodríguez Mega. The New York Times, Oct. 20, 2025. https://www.nytimes.com/2025/10/20/technology/ai-data-center-backlash-mexico-ireland.html Movimentos pedem ao MP fim de licença de data center no CE. Maristela Crispim, EcoNordeste, Aug. 25, 2025. https://agenciaeconordeste.com.br/sustentabilidade/movimentos-pedem-ao-mp-fim-de-licenca-de-data-center-no-ce/ ChatGPT Is Everywhere — Why Aren't We Talking About Its Environmental Costs? Lex McMenamin. Teen Vogue. https://www.teenvogue.com/story/chatgpt-is-everywhere-environmental-costs-oped Data centers no Nordeste, minérios na África, lucros no Vale do Silício. Accioly Filho. Le Monde Diplomatique, June 11, 2025. https://diplomatique.org.br/data-centers-no-nordeste-minerios-na-africa-lucros-no-vale-do-silicio/ The environmental footprint of data centers in the United States. Md Abu Bakar Siddik et al. 2021 Environ. Res. Lett. 16 064017. https://iopscience.iop.org/article/10.1088/1748-9326/abfba1 Tecnología en el desierto – El debate por los data centers y la crisis hídrica en Uruguay. Soledad Acuña. MUTA, Nov. 30. https://mutamag.com/cyberpunk/tecnologia-en-el-desierto/. Accessed Sept. 17, 2025. Las zonas oscuras de la evaluación ambiental que autorizó "a ciegas" el megaproyecto de Google en Cerrillos. CIPER Chile, May 25, 2020.
https://www.ciperchile.cl/2020/05/25/las-zonas-oscuras-de-la-evaluacion-ambiental-que-autorizo-aciegas-el-megaproyecto-de-google-en-cerrillos/. Accessed Sept. 17, 2025. Thirsty data centres spring up in water-poor Mexican town. Context, Sept. 6, 2024. https://www.context.news/ai/thirsty-data-centres-spring-up-in-water-poor-mexican-town BNDES lança linha de R$ 2 bilhões para data centers no Brasil. https://agenciadenoticias.bndes.gov.br/industria/BNDES-lanca-linha-de-R$-2-bilhoes-para-data-centersno-Brasil/. Los centros de datos y sus costos ocultos en México, Chile, EE UU, Países Bajos y Sudáfrica. Anna Lagos. WIRED, May 29, 2025. https://es.wired.com/articulos/los-costos-ocultos-del-desarrollo-de-centros-de-datos-en-mexico-chile-ee-uu-paises-bajos-y-sudafrica Big Tech's data centres will take water from world's driest areas. Eleanor Gunn. SourceMaterial, Apr. 9, 2025. https://www.source-material.org/amazon-microsoft-google-trump-data-centres-water-use/ Indígenas pedem que MP atue para derrubar licenciamento ambiental de data center do TikTok. Folha de S.Paulo, Aug. 26, 2025. https://www1.folha.uol.com.br/mercado/2025/08/indigenas-pedem-que-mp-atue-para-derrubar-licenciamento-ambiental-de-data-center-do-tiktok.shtml The data center boom in the desert. MIT Technology Review. https://www.technologyreview.com/2025/05/20/1116287/ai-data-centers-nevada-water-reno-computing-environmental-impact/ Conferences, academic articles, and journalism: Why are Tech Oligarchs So Obsessed with Energy and What Does That Mean for Democracy? Tamara Kneese. Tech Policy Press. https://www.techpolicy.press/why-are-tech-oligarchs-so-obsessed-with-energy-and-what-does-that-mean-for-democracy/ Data Center Boom Risks Health of Already Vulnerable Communities. Cecilia Marrinan. Tech Policy Press. https://www.techpolicy.press/data-center-boom-risks-health-of-already-vulnerable-communities/ RARE/EARTH: The Geopolitics of Critical Minerals and the AI Supply Chain.
https://www.youtube.com/watch?v=GxVM3cAxHfg Understanding AI with Data & Society / The Environmental Costs of AI Are Surging – What Now? https://www.youtube.com/watch?v=W4hQFR8Z7k0 IA e data centers: expansão corporativa em tensão com justiça socioambiental. Camila Cristina da Silva, Cynthia Picolo G. de Azevedo. https://www.jota.info/opiniao-e-analise/colunas/ia-regulacao-democracia/ia-e-data-centers-expansao-corporativa-em-tensao-com-justica-socioambiental LI, P.; YANG, J.; ISLAM, M. A.; REN, S. Making AI Less "Thirsty": Uncovering and Addressing the Secret Water Footprint of AI Models. arXiv:2304.03271, Mar. 26, 2025. Available at: https://doi.org/10.48550/arXiv.2304.03271 LIU, Y.; WEI, X.; XIAO, J.; LIU, Z.; XU, Y.; TIAN, Y. Energy consumption and emission mitigation prediction based on data center traffic and PUE for global data centers. Global Energy Interconnection, v. 3, n. 3, p. 272-282, June 3, 2020. https://doi.org/10.1016/j.gloei.2020.07.008 SIDDIK, M. A. B.; SHEHABI, A.; MARSTON, L. The environmental footprint of data centers in the United States. Environmental Research Letters, v. 16, n. 6, May 21, 2021. https://doi.org/10.1088/1748-9326/abfba1 Las Mentiras de Microsoft en Chile: Una Empresa No tan Verde. Rodrigo Vallejos, Resistencia Socioambiental de Quilicura. Revista De Frente, Mar. 18, 2022. https://www.revistadefrente.cl/las-mentiras-de-microsoft-en-chile-una-empresa-no-tan-verde-porrodrigo-vallejos-de-resistencia-socioambiental-de-quilicura/. Accessed Sept. 17, 2025.
In this panel session from the 2025 Data Center Frontier Trends Summit (Aug. 26-28) in Reston, Va., JLL's Sean Farney moderates a high-energy panel on how the industry is fast-tracking AI capacity in a world of power constraints, grid delays, and record-low vacancy. Under the banner “Scaling AI: The Role of Adaptive Reuse and Power-Rich Sites in GPU Deployment,” the discussion dives into why U.S. colocation vacancy is hovering near 2%, how power has become the ultimate limiter on AI revenue, and what it really takes to stand up GPU-heavy infrastructure at speed. Schneider Electric's Lovisa Tedestedt, Aligned Data Centers' Phill Lawson-Shanks, and Sapphire Gas Solutions' Scott Johns unpack the real-world strategies they're deploying today, from adaptive reuse of industrial sites and factory-built modular systems to behind-the-fence natural gas, microgrids, and emerging hydrogen and RNG pathways. Along the way, they explore the coming “AI inference edge,” the rebirth of the enterprise data center, and how AI is already being used to optimize data center design and operations.

During this talk, you'll learn:
* Why record-low vacancy and long interconnection queues are reshaping AI deployment strategy.
* How adaptive reuse of legacy industrial and commercial real estate can unlock gigawatt-scale capacity and community benefits.
* The growing role of liquid cooling, modular skids, and grid-to-chip efficiency in getting more power to GPUs.
* How behind-the-meter gas, virtual pipelines, and microgrids are bridging multi-year grid delays.
* Why many experts expect a renaissance of enterprise data centers for AI inference at the edge.

Moderator: Sean Farney, VP, Data Centers, Jones Lang LaSalle (JLL)
Panelists:
Tony Grayson, General Manager, Northstar
Lovisa Tedestedt, Strategic Account Executive – Cloud & Service Providers, Schneider Electric
Phill Lawson-Shanks, Chief Innovation Officer, Aligned Data Centers
Scott Johns, Chief Commercial Officer, Sapphire Gas Solutions
My guest this week is Gavin Baker. Gavin is the managing partner and CIO of Atreides Management, and he has been on the show many times before. I will never forget when I first met Gavin in 2017. I find his interest in markets and his curiosity about the world to be as infectious as that of any investor I've ever come across. He's encyclopedic on what is going on in the world of technology today, and I've had the good fortune to host him every year or two on this podcast. Gavin began covering Nvidia as an investor more than two decades ago, giving him a rare perspective on how the company – and the entire semiconductor ecosystem – has evolved. A lot has changed since our last conversation a year ago, making this the perfect time to revisit the topic. In this conversation, we talk about everything that interests Gavin: Nvidia's GPUs, Google's TPUs, the changing AI landscape, the math and business models around AI companies, and everything in between. We also discuss the idea of data centers in space, which he communicates with his usual passion and logic. Because I've asked him my traditional closing question before, at the end of this conversation I asked him a different question, which led to a discussion of his entire investing origin story that I had never heard before. Because Gavin is one of the most passionate thinkers and investors I know, these conversations are always among my favorites. I hope you enjoy this latest in the series of discussions with Gavin Baker. For the full show notes, transcript, and links to mentioned content, check out the episode page here.

-----
This episode is brought to you by Ramp. Ramp's mission is to help companies manage their spend in a way that reduces expenses and frees up time for teams to work on more valuable projects. Go to ramp.com/invest to sign up for free and get a $250 welcome bonus.

-----
This episode is brought to you by Ridgeline.
Ridgeline has built a complete, real-time, modern operating system for investment managers. It handles trading, portfolio management, compliance, customer reporting, and much more through an all-in-one real-time cloud platform. Head to ridgelineapps.com to learn more about the platform.

-----
This episode is brought to you by AlphaSense. AlphaSense has completely transformed the research process with cutting-edge AI technology and a vast collection of top-tier, reliable business content. Invest Like the Best listeners can get a free trial now at Alpha-Sense.com/Invest and experience firsthand how AlphaSense and Tegus help you make smarter decisions faster.

-----
Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com).

Show Notes:
(00:00:00) Welcome to Invest Like The Best
(00:04:00) Meet Gavin Baker
(00:06:00) Understanding Gemini 3
(00:09:05) Scaling Laws for Pre-Training
(00:12:12) Google v. Nvidia
(00:16:52) Google as Lowest Cost Producer of Tokens
(00:28:05) AI Can Automate Anything that can be Verified
(00:34:30) The AI Bear Case: Edge AI
(00:37:18) Going from Intelligence to Usefulness
(00:43:44) AI Adoption in Fortune 500 Companies
(00:48:58) Frontier Models and Industry Dynamics
(00:56:40) China's Mistake and Blackwell's Geopolitical Leverage
(00:57:50) OpenAI's Code Red
(01:00:46) Data Centers in Space
(01:07:13) Cycles in AI
(01:11:10) Power as a Bottleneck
(01:14:17) AI Native Entrepreneurs
(01:16:21) Semiconductor VC
(01:20:41) The Mistake the SaaS Industry is Making
(01:26:50) Series of Bubbles
(01:28:56) Whatever AI Needs, It Gets
(01:29:57) Investing is the Search for Truth
(01:31:24) Gavin's Investing Origin Story
In this conversation, Rob Jones, Area Vice President of Sales at Chatsworth Products (CPI), discusses the critical need for advanced cooling solutions in data centers, driven by the convergence of AI and high-performance computing (HPC). He highlights CPI's innovative liquid cooling technologies, which are essential for managing the increasing heat output of modern GPUs. The discussion also covers the types of organizations adopting CPI's solutions and how CPI is future-proofing infrastructure to accommodate evolving technology demands.

Thank you to Chatsworth Products/CPI for their sponsorship of the podcast. Their expertise and support help us keep the lights on here at the podcast. Please check out their selection of products on Graybar's website or reach out to your local Graybar representative to learn how CPI can help keep your data center efficient and cool.
https://www.graybar.com/manufacturers/chatsworth/c/sup-chatsworth-group?utm_source=podcast&utm_medium=show-notes&utm_campaign=Sponsor-Highlight-2025-Chatsworth
YouTube link: https://youtu.be/GDTl_Mux9SQ
How can you scale AI at the enterprise, yet still hit your climate goals? And can heavy AI usage and an enterprise's ESG mission co-exist? Ashutosh Ahuja lays it out for us.

Aligning AI With Climate And Business Goals -- An Everyday AI Chat with Jordan Wilson and Ashutosh Ahuja

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
AI's Environmental Impact and Climate Concerns
Companies Aligning AI with ESG Goals
AI Adoption Versus Carbon Footprint Tradeoffs
Metrics for Measuring AI's Environmental Impact
Business Efficiency Gains from AI Adoption
Real-World Examples: AI Offsetting Carbon Footprint
Industry Opportunities for Sustainable AI Integration
Future Trends: Efficient AI Models and Edge Computing

Timestamps:
00:00 Everyday AI Podcast & Newsletter
05:52 Balancing Progress and Legacy
07:03 "Should Companies Limit AI Usage?"
12:02 "Sentiment Analysis for Business Growth"
17:07 "Energy Efficiency Impacts ESG Metrics"
19:40 Robots, Energy, and AI Opportunity
21:41 AI Efficiency and Climate Balance
25:04 "Trust Instincts in Investments"

Keywords: AI and climate, climate goals, aligning AI with ESG, environmental impact of AI, carbon footprint, energy use in AI data centers, water cooling for GPUs, sustainable business practices, enterprise AI strategy, ESG compliance, climate pledges, AI adoption in business, carbon footprint metrics, machine learning for sustainability, predictive analytics, ethical AI, green AI solutions, renewable energy sector, AI in waste management, camera vision for waste sorting, delivery robots, edge AI, small business AI implementation, AI efficiency, sentiment analysis, customer patterns, predictive maintenance, IoT data, auto scaling, cloud computing, resource optimization, SEC filings, brand sentiment tracking, LLM energy consumption, environmental considerations for AI, future of AI in climate action, business efficiency, human in the loop, philanthropic business practices, sustainable architecture, large language models and climate, tech industry climate initiatives, AI-powered resource savings, operational sustainability.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner
This Week In Startups is made possible by:
Northwest Registered Agent - https://www.northwestregisteredagent.com/twist
Crusoe - http://crusoe.ai/build
Gusto - https://www.gusto.com/twist

Today's show: Delegating is its own unique skill, requiring training and a real investment of time and attention. On TWiST, Jason chats for a full hour with the founder of one of his favorite startups, Athena, which trains online assistants and pairs them with busy founders and executives. (Jason has 2!) But getting the MOST out of your executive assistants is less obvious than it looks. Jonathan unpacks some of the secrets to “Black Diamond Delegating” and how he manages to keep 6 different high-level helpers operating at once. Plus, Jason and Jonathan look back at the Open Angel Forum days, when Jason invested in Jonathan's previous company, Thumbtack, praise the “Checklist Manifesto,” discuss the telltale signs you've achieved market pull, and share lots more insights.

Timestamps:
(01:53) We're joined by Jonathan Swanson from one of JCal's fav startups, Athena!
(02:02) Jason and Jonathan first met during the Open Angel Forum, when Jonathan was working on Thumbtack
(06:44) Finding the “little touches” that can help make an app more delightful
(09:47) Northwest Registered Agent - Form your entire business identity in just 10 clicks and 10 minutes. Get more privacy, more options, and more done—visit https://www.northwestregisteredagent.com/twist today!
(12:05) The shift from Thumbtack to Athena was all about time
(12:52) How Jonathan delegates to 6 exec assistants at once
(14:22) Pricing Athena's EAs: Jason runs the numbers
(15:09) Why Athena made Jason believe in hiring assistants again
(18:15) Getting past the “Cardinal Sins of Delegation”
(19:38) Crusoe Cloud: Crusoe is the AI factory company. Reliable infrastructure and expert support.
Visit https://crusoe.ai/build to reserve your capacity for the latest GPUs today.(20:48) Will AI ever be able to replace Athena assistants?(23:41) Inside how Athena finds and trains assistants from around the world(27:01) How JCal became an Athena Ambassador… and almost crashed the system!(30:55) Gusto - Check out the online payroll and benefits experts with software built specifically for small business and startups. Try Gusto today and get three months FREE at https://www.gusto.com/twist(32:11) The magic of having assistants work on “backstop projects” and creative tasks(37:14) How to know when you have achieved market pull(40:05) Why getting the most out of delegating takes real investment and training(44:36) More praise for the Checklist Manifesto(46:26) Jonathan gives us a peek at what “Black Diamond Delegation” looks like(52:14) Jason's early experiences hiring overseas assistants, from the Mahalo days*Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.comCheck out the TWIST500: https://twist500.comSubscribe to This Week in Startups on Apple: https://rb.gy/v19fcp*Follow Lon:X: https://x.com/lons*Follow Alex:X: https://x.com/alexLinkedIn: https://www.linkedin.com/in/alexwilhelm/*Follow Jason:X: https://twitter.com/JasonLinkedIn: https://www.linkedin.com/in/jasoncalacanis/*Thank you to our partners:(9:47) Northwest Registered Agent - Form your entire business identity in just 10 clicks and 10 minutes. Get more privacy, more options, and more done—visit https://www.northwestregisteredagent.com/twist today!(19:38) Crusoe Cloud: Crusoe is the AI factory company. Reliable infrastructure and expert support. Visit https://crusoe.ai/build to reserve your capacity for the latest GPUs today.(30:55) Gusto - Check out the online payroll and benefits experts with software built specifically for small business and startups. Try Gusto today and get three months FREE at https://www.gusto.com/twist
CoreWeave CFO Nitin Agrawal joins Run the Numbers to unpack the finance engine behind one of the fastest-growing AI infrastructure companies on the planet. CJ and Nitin dive into what it takes to build financial discipline in an environment where business models are being invented in real time, discussing the company's 700% growth last year and massive first-quarter performance as a newly public company. They cover capex strategy, securitizing GPUs, managing billion-dollar revenue backlogs, and structuring incentives for hyperscale deals, all while keeping investors grounded and servers running at full tilt. If you want a front-row seat to finance in the AI arms race, this episode delivers.
—SPONSORS:
Tipalti automates the entire payables process—from onboarding suppliers to executing global payouts—helping finance teams save time, eliminate costly errors, and scale confidently across 200+ countries and 120 currencies. More than 5,000 businesses already trust Tipalti to manage payments with built-in security and tax compliance. Visit https://www.tipalti.com/runthenumbers to learn more.
Aleph automates 90% of manual, error-prone busywork, so you can focus on the strategic work you were hired to do. Minimize busywork and maximize impact with the power of a web app, the flexibility of spreadsheets, and the magic of AI. Get a personalised demo at https://www.getaleph.com/run
Fidelity Private Shares is the all-in-one equity management platform that keeps your cap table clean, your data room organized, and your equity story clear—so you never risk losing a fundraising round over messy records. Schedule a demo at https://www.fidelityprivateshares.com and mention Mostly Metrics to get 20% off.
Sage Intacct is the cloud financial management platform that replaces spreadsheets, eliminates manual work, and keeps your books audit-ready—so you can scale without slowing down. It combines accounting, ERP, and real-time reporting for retail, financial services, logistics, tech, professional services, and more. Sage Intacct delivers fast ROI, with payback in under six months and up to 250% return. Rated #1 in customer satisfaction for eight straight years. Visit Sage Intacct and take control of your growth: https://bit.ly/3Kn4YHt
Mercury is business banking built for builders, giving founders and finance pros a financial stack that actually works together. From sending wires to tracking balances and approving payments, Mercury makes it simple to scale without friction. Join the 200,000+ entrepreneurs who trust Mercury and apply online in minutes at https://www.mercury.com
RightRev automates the revenue recognition process from end to end, gives you real-time insights, and ensures ASC 606 / IFRS 15 compliance—all while closing books faster. For RevRec that auditors actually trust, visit https://www.rightrev.com and schedule a demo.
—LINKS:
Nitin on LinkedIn: https://www.linkedin.com/in/nitin-agrawal-cloudcfo/
Company: https://www.coreweave.com/
CJ on LinkedIn: https://www.linkedin.com/in/cj-gustafson-13140948/
Mostly metrics: https://www.mostlymetrics.com
—RELATED EPISODES:
The Art and Science of a Day-One IPO Pop with OneStream Software CFO Bill Koefoed: https://youtu.be/kYCn7XNkCBc
From Facebook's Hypergrowth to Daffy's Disruption: A CFO's Playbook for Saying Yes: https://youtu.be/bRIZ6oNPGD0
—TIMESTAMPS:
00:00:00 Preview and Intro
00:02:54 Sponsors – Tipalti | Aleph | Fidelity Private Shares
00:06:12 Interview Begins: Scaling CoreWeave
00:06:52 CoreWeave's Pivot From Crypto to AI
00:11:41 Why CoreWeave Is Uniquely Positioned to Lead AI Infrastructure
00:13:32 Hiring for Both Scrappiness and Scale
00:16:01 Post-IPO Whirlwind: Acquisitions, Debt Raises, and 10-Year Deals
00:16:43 Sponsors – Sage Intacct | Mercury | RightRev
00:20:13 Managing Investor Expectations With Radical Transparency
00:22:39 Doubling Active Power in Six Months
00:25:19 Risk-Balanced Capital Deployment: Power First, GPUs Second
00:27:12 Financing GPUs With Delayed-Draw Facilities
00:29:38 CoreWeave Rated Platinum for GPU Cluster Performance
00:32:25 Compute as the Bottleneck for AI Growth
00:33:47 Explaining Revenue Backlog Shape & Timing
00:35:06 The Strength of Reserved Instance Contracts
00:36:07 Giving Tight but Honest Guidance
00:40:26 How Mega-Deals Require C-Suite Participation
00:42:19 Tackling Revenue Concentration Through Diversification
00:44:05 Building an AI-Only Cloud, Not a General-Purpose Cloud
00:46:27 Capital Markets Muscle: Raising Billions at Speed
00:47:47 Accounting Complexity in a Business With No Precedent
00:49:33 Even the CFO Must Unlearn Old Cloud Assumptions
00:51:29 Scaling Public-Company Processes in 90-Day Cycles
00:54:42 The Couch Fire vs. House Fire Framework
00:57:17 Balancing Risk Mitigation With Opportunity Seeking
01:00:30 No Downtime for ERP Changes During Hypergrowth
01:02:33 Why the Team Stays Energized Despite the Chaos
#RunTheNumbersPodcast #CFOInsights #Hypergrowth #AIInfrastructure #FinanceStrategy This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit cjgustafson.substack.com
Send us a text
We break down Cloudflare's outage, why a small config change caused big waves, and what better guardrails could look like. We then unpack AWS and Google's cross‑cloud link, Megaport's move into bare metal and GPUs, Webex adding deepfake defenses, and a new startup aiming to tune AI networks at microsecond speed.
• Cloudflare outage root cause and fallout
• Automation guardrails, validation and rollbacks
• AWS–Google cross‑cloud connectivity preview
• Pricing, routing and policy gaps to watch
• Megaport acquires Latitude SH for compute
• Bare metal and GPU as a service near clouds
• Webex integrates deepfake and fraud detection
• Accuracy risks, UX and escalation paths
• Apstra founders launch Aria for AI networks
• Microburst telemetry, closed‑loop control and SLAs
If you enjoyed this, please give us some feedback or share it with a friend. We would love to hear from you, and we will see you in two weeks with another episode.
Purchase Chris and Tim's book on AWS Cloud Networking: https://www.amazon.com/Certified-Advanced-Networking-Certification-certification/dp/1835080839/
Check out the Monthly Cloud Networking News: https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/
Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on BlueSky: https://bsky.app/profile/cables2clouds.com
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
In this episode, Zain Asgar, co-founder and CEO of Gimlet Labs, joins us to discuss heterogeneous AI inference across diverse hardware. Zain argues that the current industry standard of running all AI workloads on high-end GPUs is unsustainable for agents, which consume significantly more tokens than traditional LLM applications. We explore Gimlet's approach to heterogeneous inference, which involves disaggregating workloads across a mix of hardware—from H100s to older GPUs and CPUs—to optimize unit economics without sacrificing performance. We dive into their "three-layer cake" architecture: workload disaggregation, a compilation layer that maps models to specific hardware targets, and a novel system that uses LLMs to autonomously rewrite and optimize compute kernels. Finally, we discuss the complexities of networking in heterogeneous environments, the trade-offs between numerical precision and application accuracy, and the future of hardware-aware scheduling. The complete show notes for this episode can be found at https://twimlai.com/go/757.
Darrick Horton is the CEO and co-founder of TensorWave, the company making waves in AI infrastructure by building high-performance compute on AMD chips. In 2023, he and his team took the unconventional path of bypassing Nvidia, a bold bet that has since paid off with nearly $150 million raised from Magnetar, AMD Ventures, Prosperity7, and others. TensorWave is now operating a dedicated training cluster of around 8,000 AMD Instinct MI325X GPUs and has already hit a $100 million revenue run rate. Darrick is a serial entrepreneur with a track record of building infrastructure companies. Before TensorWave, he co-founded VMAccel, sold Lets Rolo to LifeKey, and co-founded the crypto mining company VaultMiner. He began his career as a mechanical engineer and plasma physicist at Lockheed Martin's Skunk Works, where he worked on nuclear fusion energy. While he studied physics and mechanical engineering at Andrews University, he left early to pursue entrepreneurship and hasn't looked back since.
In this conversation we discussed:
Why Darrick chose AMD over Nvidia to build TensorWave's AI infrastructure, and how that decision created a competitive advantage in a GPU-constrained market
What makes training clusters more versatile than inference clusters, and why TensorWave focused on the former to meet broader customer needs
How Neocloud providers like TensorWave can move faster and innovate more effectively than legacy hyperscalers in deploying next-generation AI infrastructure
Why power, not GPUs, is becoming the biggest constraint in scaling AI workloads, and how data center architecture must evolve to address it
Why Darrick predicts AI architectures will continue to evolve beyond transformers, creating constant shifts in compute demand
How massive increases in model complexity are accelerating the need for green energy, tighter feedback loops, and seamless integration of compute into AI workflows
Resources:
Subscribe to the AI & The Future of Work Newsletter
Connect with Darrick on LinkedIn
AI fun fact article
On How the new definition of work
In this episode, Venkat Kirishnamurthy, a Principal Architect at Cisco, explains the ins and outs of designing an AI datacenter. How is it different from other datacenters? What kind of scale are we talking about? What throughput is required to connect 1,000 GPUs together? Learn more about CX Services and how we can help you design your datacenter: https://www.cisco.com/site/us/en/services/index.html
The Flush of the week, with the best news stories from this past week. Leave me your comment. Official social media: ► https://linktr.ee/DrakSpartanOficial For anything or any issue, contact Diego Walker: diegowalkercontacto@gmail.com Video date: [30-11-2025] #flush #amd #ram #nvidia #amd #conectores #gpu #fire #conectorgpu #drakspartan #drak #elflush
Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover: 1) Black Friday secrets 2) Google may sell its TPUs to Meta and financial institutions 3) Nvidia sends an antsy tweet 4) How does Google's TPU stack up next to NVIDIA's GPUs 5) Could Google package the TPU with cloud services? 6) NVIDIA responds to the criticism 7) HSBC on how much OpenAI needs to earn to cover its investments 8) Thinking about OpenAI's advertising business 9) ChatGPT users lose touch with reality 10) Ilya Sutskever's mysterious product and revenue plans 11) X reveals our locations --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com Learn more about your ad choices. Visit megaphone.fm/adchoices
Send us a textIn this episode of The Skinny on Wall Street, Kristen and Jen unpack the story stirring up markets: Michael Burry's latest warning that Big Tech is overstating earnings by extending the “useful life” assumptions on their GPUs. The conversation becomes a real-time teach-in on depreciation, useful life estimates, GAAP vs. tax depreciation, and how a small shift in an accounting estimate can meaningfully inflate EPS—especially for mega-cap tech stocks that trade heavily on P/E multiples. Kristen walks through exactly how depreciation affects valuation, and why some metrics (like EBITDA) and methodologies (like the DCF) are untouched by the choice of useful life. The big question the duo wrestle with: is Burry identifying a real risk, or is this a nothingburger amplified by market paranoia? From there, Jen shifts to the fixed income landscape ahead of the December Fed meeting—one the central bank must navigate without key data (payrolls and CPI) that won't arrive until after the rate decision. She breaks down how Powell is managing optionality near the end of his term, how the market is pricing a December cut, and what a likely dovish successor (Kevin Hassett) could mean for rates in 2026. They also dig into credit markets: years of high coupons have fueled relentless reinvestment demand, but an uptick in issuance—especially from AI-heavy hyperscalers—may finally rebalance supply and demand. The duo look abroad as well, analyzing the UK's newly announced national property tax and what it signals about global fiscal stress.The episode wraps with big updates from The Wall Street Skinny: the long-awaited launch of their Financial Modeling Course, the continued fixed income course presale, and new January 2026 office hours, plus the return date for HBO's Industry (January 11!). 
To get 25% off all our self paced courses, use code BLACKFRIDAY25 at checkout! Learn more about 9fin HERE. Shop our Self Paced Courses: Investment Banking & Private Equity Fundamentals HERE. Fixed Income Sales & Trading HERE. Wealthfront.com/wss. This is a paid endorsement for Wealthfront. May not reflect others' experiences. Similar outcomes not guaranteed. Wealthfront Brokerage is not a bank. Rate subject to change. Promo terms apply. If eligible for the boosted rate of 4.15% offered in connection with this promo, the boosted rate is also subject to change if base rate decreases during the 3 month promo period. The Cash Account, which is not a deposit account, is offered by Wealthfront Brokerage LLC ("Wealthfront Brokerage"), Member FINRA/SIPC. Wealthfront Brokerage is not a bank. The Annual Percentage Yield ("APY") on cash deposits as of 11/7/25, is representative, requires no minimum, and may change at any time. The APY reflects the weighted average of deposit balances at participating Program Banks, which are not allocated equally. Wealthfront Brokerage sweeps cash balances to Program Banks, where they earn the variable APY. Sources HERE.
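The useful-life mechanics Kristen walks through can be sketched in a few lines. This is a hypothetical illustration with invented figures (not taken from the episode or any company's filings): extending the assumed useful life of a GPU fleet shrinks the annual straight-line depreciation charge, which lifts reported EPS even though cash flows, and therefore EBITDA and a DCF, are untouched.

```python
# Hypothetical numbers for illustration only (not from the episode).

def annual_depreciation(cost, salvage, useful_life_years):
    """Straight-line depreciation: spread (cost - salvage) evenly over the asset's life."""
    return (cost - salvage) / useful_life_years

def eps(income_before_depreciation, depreciation, shares_outstanding):
    """Earnings per share after the depreciation charge (taxes ignored for simplicity)."""
    return (income_before_depreciation - depreciation) / shares_outstanding

FLEET_COST = 10_000_000_000   # $10B of GPUs on the balance sheet
SALVAGE = 0                   # assume no residual value
INCOME = 5_000_000_000        # $5B income before depreciation
SHARES = 1_000_000_000        # 1B shares outstanding

# Same fleet, same cash flows -- only the accounting estimate changes.
dep_short = annual_depreciation(FLEET_COST, SALVAGE, 4)  # $2.5B/yr over 4 years
dep_long = annual_depreciation(FLEET_COST, SALVAGE, 6)   # ~$1.67B/yr over 6 years

print(f"EPS with 4-year life: {eps(INCOME, dep_short, SHARES):.2f}")  # 2.50
print(f"EPS with 6-year life: {eps(INCOME, dep_long, SHARES):.2f}")   # 3.33
```

At a fixed P/E multiple, that roughly one-third jump in EPS feeds straight into a higher implied share price, which is why a quiet change in a useful-life estimate can matter so much for mega-cap valuations.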
This is a recap of the top 10 posts on Hacker News on November 27, 2025. This podcast was generated by wondercraft.ai
(00:30): Migrating the main Zig repository from GitHub to Codeberg. Original post: https://news.ycombinator.com/item?id=46064571&utm_source=wondercraft_ai
(01:52): Penpot: The Open-Source Figma. Original post: https://news.ycombinator.com/item?id=46064757&utm_source=wondercraft_ai
(03:14): Tell HN: Happy Thanksgiving. Original post: https://news.ycombinator.com/item?id=46065955&utm_source=wondercraft_ai
(04:36): Linux Kernel Explorer. Original post: https://news.ycombinator.com/item?id=46066280&utm_source=wondercraft_ai
(05:58): DIY NAS: 2026 Edition. Original post: https://news.ycombinator.com/item?id=46065034&utm_source=wondercraft_ai
(07:20): AI CEO – Replace your boss before they replace you. Original post: https://news.ycombinator.com/item?id=46072002&utm_source=wondercraft_ai
(08:42): Same-day upstream Linux support for Snapdragon 8 Elite Gen 5. Original post: https://news.ycombinator.com/item?id=46070668&utm_source=wondercraft_ai
(10:04): We're losing our voice to LLMs. Original post: https://news.ycombinator.com/item?id=46069771&utm_source=wondercraft_ai
(11:26): TPUs vs. GPUs and why Google is positioned to win AI race in the long term. Original post: https://news.ycombinator.com/item?id=46069048&utm_source=wondercraft_ai
(12:48): The Nerd Reich – Silicon Valley Fascism and the War on Democracy. Original post: https://news.ycombinator.com/item?id=46066482&utm_source=wondercraft_ai
This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
Kony Kwong is the CEO and Co-Founder of GAIB AI, a pioneering platform transforming physical GPUs into a new yield-bearing asset class. As AI drives exponential demand for computational infrastructure, GAIB stands at the forefront of a new financial frontier — where AI and DeFi converge, to provide additional funding channels for cloud/data centers, while offering investors direct access to the explosive AI economy. With a background spanning quantitative trading, machine learning engineering at top-tier firms, and early bets on the AI + blockchain convergence, Kony brings a sharp operator-investor lens to one of the fastest-moving sectors in tech. In this episode, Kony unpacks WAGMI Ventures' thesis on why AI agents are the killer app for crypto in 2025 and beyond, how GAIB is transforming physical GPUs into a new yield-bearing asset class, designing autonomous onchain agents that actually own assets, trade, and compound value, and the unique model of simultaneously building and investing behind its own convictions. He dives deep into the technical and economic breakthroughs needed for truly agentic crypto systems, the massive alpha in AI economies, and how GAIB is positioning itself as building the first economic layer for AI compute, bringing new investment possibilities into this surging sector.
As Americans prep their Thanksgiving feasts, one hotline is bracing for its busiest day of the year. Nicole Johnson, director of the Butterball Turkey Talk-Line, explains the most common turkey questions. Then, Harvard professor Arthur Brooks shares advice for navigating family dynamics, handling holiday anxiety, and finding common ground at the dinner table. Plus, Nvidia says its GPUs are a generation ahead of Google's AI chips, and Campbell's Soup responds to leaked audio claiming its food is made for “poor people.”
Arthur Brooks 13:34
Nicole Johnson 21:14
In this episode:
Nicole Johnson, @butterball
Arthur Brooks, @arthurbrooks
Becky Quick, @BeckyQuick
Andrew Ross Sorkin, @andrewrsorkin
Cameron Costa, @CameronCostaNY
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning
In this episode, we break down Nvidia's bold statement that its latest GPUs outperform Google's AI chips by a full generation. We explore what this means for the AI hardware race and how it could shape future model development.Get the top 40+ AI Models for $20 at AI Box: https://aibox.aiAI Chat YouTube Channel: https://www.youtube.com/@JaedenSchaferJoin my AI Hustle Community: https://www.skool.com/aihustleSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
We're told that AI progress is slowing down, that pre-training has hit a wall, that scaling laws are running out of road. Yet we're releasing this episode in the middle of a wild couple of weeks that saw GPT-5.1, GPT-5.1 Codex Max, fresh reasoning modes and long-running agents ship from OpenAI — on top of a flood of new frontier models elsewhere. To make sense of what's actually happening at the edge of the field, I sat down with someone who has literally helped define both of the major AI paradigms of our time.Łukasz Kaiser is one of the co-authors of “Attention Is All You Need,” the paper that introduced the Transformer architecture behind modern LLMs, and is now a leading research scientist at OpenAI working on reasoning models like those behind GPT-5.1. In this conversation, he explains why AI progress still looks like a smooth exponential curve from inside the labs, why pre-training is very much alive even as reinforcement-learning-based reasoning models take over the spotlight, how chain-of-thought actually works under the hood, and what it really means to “train the thinking process” with RL on verifiable domains like math, code and science. We talk about the messy reality of low-hanging fruit in engineering and data, the economics of GPUs and distillation, interpretability work on circuits and sparsity, and why the best frontier models can still be stumped by a logic puzzle from his five-year-old's math book.We also go deep into Łukasz's personal journey — from logic and games in Poland and France, to Ray Kurzweil's team, Google Brain and the inside story of the Transformer, to joining OpenAI and helping drive the shift from chatbots to genuine reasoning engines. 
Along the way we cover GPT-4 → GPT-5 → GPT-5.1, post-training and tone, GPT-5.1 Codex Max and long-running coding agents with compaction, alternative architectures beyond Transformers, whether foundation models will “eat” most agents and applications, what the translation industry can teach us about trust and human-in-the-loop, and why he thinks generalization, multimodal reasoning and robots in the home are where some of the most interesting challenges still lie.
OpenAI
Website - https://openai.com
X/Twitter - https://x.com/OpenAI
Łukasz Kaiser
LinkedIn - https://www.linkedin.com/in/lukaszkaiser/
X/Twitter - https://x.com/lukaszkaiser
FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap
Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck
(00:00) – Cold open and intro
(01:29) – “AI slowdown” vs a wild week of new frontier models
(08:03) – Low-hanging fruit: infra, RL training and better data
(11:39) – What is a reasoning model, in plain language?
(17:02) – Chain-of-thought and training the thinking process with RL
(21:39) – Łukasz's path: from logic and France to Google and Kurzweil
(24:20) – Inside the Transformer story and what “attention” really means
(28:42) – From Google Brain to OpenAI: culture, scale and GPUs
(32:49) – What's next for pre-training, GPUs and distillation
(37:29) – Can we still understand these models? Circuits, sparsity and black boxes
(39:42) – GPT-4 → GPT-5 → GPT-5.1: what actually changed
(42:40) – Post-training, safety and teaching GPT-5.1 different tones
(46:16) – How long should GPT-5.1 think? Reasoning tokens and jagged abilities
(47:43) – The five-year-old's dot puzzle that still breaks frontier models
(52:22) – Generalization, child-like learning and whether reasoning is enough
(53:48) – Beyond Transformers: ARC, LeCun's ideas and multimodal bottlenecks
(56:10) – GPT-5.1 Codex Max, long-running agents and compaction
(1:00:06) – Will foundation models eat most apps? The translation analogy and trust
(1:02:34) – What still needs to be solved, and where AI might go next
MRKT Matrix - Tuesday, November 25th Dow rallies nearly 700 points on increased hope for a December rate cut (CNBC) Consumer confidence hits lowest point since April as job worries grow (CNBC) Market Volatility Underscores Epic Buildup of Global Risk (NYTimes) Nvidia says its GPUs are a ‘generation ahead' of Google's AI chips (CNBC) Google, the Sleeping Giant in Global AI Race, Now ‘Fully Awake' (Bloomberg) OpenAI needs to raise at least $207bn by 2030 so it can continue to lose money, HSBC estimates (FT) Oracle-Linked Borrowing Binge Worries Lenders (The Information) Private Credit's Sketchy Marks Get Warning Shot From Wall Street's Top Cop (Bloomberg) --- Subscribe to our newsletter: https://riskreversalmedia.beehiiv.com/subscribe MRKT Matrix by RiskReversal Media is a daily AI powered podcast bringing you the top stories moving financial markets Story curation by RiskReversal, scripts by Perplexity Pro, voice by ElevenLabs
This week Qualcomm is back, and maybe everything is terrible with Arduino. Valve has been funding more Open Source work, and we're reading those tea leaves. Blender is out, AMD is writing code for their next-gen GPUs, and there's finally a remote access solution for Wayland. For tips, we have LibrePods for better AirPod support on Linux, paru for an easier time with the Arch User Repository, and the Zork snap to celebrate this newly Open-Sourced game from yesteryear. You can find the show notes at https://bit.ly/49uSNCy and have a great week! Host: Jonathan Bennett Co-Hosts: Jeff Massie and Rob Campbell Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
Once again NVIDIA had a record earnings quarter (Q3FY26), but the strength of their ongoing success will depend on many factors that may or may not be within their control. Let's explore those broader factors.
SHOW: 978
SHOW TRANSCRIPT: The Cloudcast #978 Transcript
SHOW VIDEO: https://youtube.com/@TheCloudcastNET
CLOUD NEWS OF THE WEEK: http://bit.ly/cloudcast-cnotw
CHECK OUT OUR NEW PODCAST: "CLOUDCAST BASICS"
SHOW SPONSORS:
[TestKube] TestKube is a Kubernetes-native testing platform, orchestrating all your test tools, environments, and pipelines into scalable workflows empowering Continuous Testing. Check it out at TestKube.io/cloudcast
[Mailtrap] Try Mailtrap for free
[Interconnected] Interconnected is a new series from Equinix diving into the infrastructure that keeps our digital world running. With expert guests and real-world insights, we explore the systems driving AI, automation, quantum, and more. Just search “Interconnected by Equinix”.
SHOW NOTES:
NVIDIA Earnings (Q3FY2026 - November 2025)
WHAT WILL BE THE NEW METRICS AND MILESTONES TO TRACK?
Customer Revenues (e.g. CoreWeave, OpenAI)
“Alternatives” Revenues (e.g. Google/TPUs, AMD, China, etc.)
Customer Success Stories (%ROI, Business Differentiation, Business Acceleration)
Growth of Data Centers (e.g. buildouts, zoning approvals, etc.)
Electricity Buildouts (e.g. nuclear, coal, alternative, regulatory changes, municipality adoption)
Accounting Deep-Dives into NVIDIA (not fraud, but days receivables, inventory buybacks, etc.)
$500B in back orders (Oracle, Microsoft, OpenAI, GrokAI)
FEEDBACK?
Email: show at the cloudcast dot net
Twitter/X: @cloudcastpod
BlueSky: @cloudcastpod.bsky.social
Instagram: @cloudcastpod
TikTok: @cloudcastpod
Register here to join Founder University Japan's kickoff: https://luma.com/cm0x90mk Today's show: Google and Meta had their cases dismissed (or received a slap on the wrist)… Despite all the backlash and cynicism, AI companies continue making bank and releasing hot new products… What does it all mean? For Jason Calacanis, the signs are pointing to a “major M&A moment,” with huge opportunities for increased efficiency and consolidation among America's favorite brands and largest companies. Who will it be? Join Jason and Alex for a round of hot speculation. PLUS why Jason thinks Michael Burry is both right and wrong about GPU depreciation, why NOTHING is certain about these OpenAI mega-deals, Google's Nano Banana Pro can make infographics and they're VERY impressive… and much more.Timestamps:(1:54) Jason's calling in from Vegas… He's doing a hot lap at F1!(3:18) How restaurants are becoming the new Hot IP(6:50) Founder University is heading to TOKYO!(9:27) Why Jason thinks the future of startups is truly global(10:06) Pipedrive - Bring your entire sales process into one elegant space. Get started with a 30 day free trial at pipedrive.com/twist(11:39) Nvidia killed it on the numbers… but what are the vibes around AI? Jason sounds off.(13:05) Why nothing is certain when it comes to the Nvidia/OpenAI deal(19:40) Is Google now WINNING consumer adoption of AI? How did it get this close?(19:57) Crusoe Cloud: Crusoe is the AI factory company. Reliable infrastructure and expert support. Visit https://crusoe.ai/build to reserve your capacity for the latest GPUs today.(26:07) Meanwhile, AI apps are still dominating the iOS Store(27:09) Why Jason and Alex think Michael Burry's both right and wrong about GPU depreciation(30:13) Northwest Registered Agent - Form your entire business identity in just 10 clicks and 10 minutes.
Get more privacy, more options, and more done—visit https://www.northwestregisteredagent.com/twist today!(37:46) We're testing out Nano Banana Pro on a BBQ infographic challenge(43:42) What a week for AI models! It doesn't seem like things are slowing down…(46:12) Kalshi is growing fast, but can it catch Polymarket?(47:50) Is a rate cut coming? Jason and Alex read the tea leaves.(50:13) Why Jason predicts a “major M&A moment” in the next six months(52:09) VIEWER QUESTION: What should a software engineer be working on RIGHT NOW?(54:02) Founder Friday is now… STARTUP SUPPER CLUBSubscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.comCheck out the TWIST500: https://www.twist500.comSubscribe to This Week in Startups on Apple: https://rb.gy/v19fcpFollow Lon:X: https://x.com/lonsFollow Alex:X: https://x.com/alexLinkedIn: https://www.linkedin.com/in/alexwilhelmFollow Jason:X: https://twitter.com/JasonLinkedIn: https://www.linkedin.com/in/jasoncalacanisThank you to our partners:(10:06) Pipedrive - Bring your entire sales process into one elegant space. Get started with a 30 day free trial at pipedrive.com/twist(19:57) Crusoe Cloud: Crusoe is the AI factory company. Reliable infrastructure and expert support. Visit https://crusoe.ai/build to reserve your capacity for the latest GPUs today.(30:13) Northwest Registered Agent - Form your entire business identity in just 10 clicks and 10 minutes.
Get more privacy, more options, and more done—visit https://www.northwestregisteredagent.com/twist today!Great TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarlandCheck out Jason's suite of newsletters: https://substack.com/@calacanisFollow TWiST:Twitter: https://twitter.com/TWiStartupsYouTube: https://www.youtube.com/thisweekinInstagram: https://www.instagram.com/thisweekinstartupsTikTok: https://www.tiktok.com/@thisweekinstartupsSubstack: https://twistartups.substack.com
In a world of Rust, Go, and Python, why does C++ still matter? Dr. Gabriel Dos Reis joins Scott to explain how C++ continues to shape everything from GPUs and browsers to AI infrastructure. They talk about performance, predictability, and the art of balancing power with safety...and how the language's constant evolution keeps it relevant four decades in.