Data Center Frontier Editor Rich Miller is your guide to how next-generation technologies are changing our world, and the critical role the data center industry plays in creating our extraordinary future.

As the data center industry enters the AI era in earnest, incremental upgrades are no longer enough. That was the central message of the Data Center Frontier Trends Summit 2025 session “AI Is the New Normal: Building the AI Factory for Power, Profit, and Scale,” where operators and infrastructure leaders made the case that AI is no longer a specialty workload; it is redefining the data center itself. Panelists described the AI factory as a new infrastructure archetype: purpose-built, power-intensive, liquid-cooled, and designed for constant change. Rack densities that once hovered in the low teens have now surged past 50 kilowatts and, in some cases, toward megawatt-scale configurations. Facilities designed for yesterday's assumptions simply cannot keep up.

Ken Patchett of Lambda framed AI factories as inherently multi-density environments, capable of supporting everything from traditional enterprise racks to extreme GPU deployments within the same campus. These facilities are not replacements for conventional data centers, he noted, but essential additions; and they must be designed for rapid iteration as chip architectures evolve every few months.

Wes Cummins of Applied Digital extended the conversation to campus scale and geography. AI demand is pushing developers toward tertiary markets where power is abundant but historically underutilized. Training and inference workloads now require hundreds of megawatts at single sites, delivered in timelines that have shrunk from years to little more than a year. Cost efficiency, ultra-low PUE, and flexible shells are becoming decisive competitive advantages.

Liquid cooling emerged as a foundational requirement rather than an optimization. Patrick Pedroso of Equus Compute Solutions compared the shift to the automotive industry's move away from air-cooled engines. From rear-door heat exchangers to direct-to-chip and immersion systems, cooling strategies must now accommodate fluctuating AI workloads while enabling energy recovery—even at the edge.

For Kenneth Moreano of Scott Data Center, the AI factory is as much a service model as a physical asset. By abstracting infrastructure complexity and controlling the full stack in-house, his company enables enterprise customers to move from AI experimentation to production at scale, without managing the underlying technical detail.

Across the discussion, panelists agreed that the industry's traditional design and financing playbook is obsolete. AI infrastructure cannot be treated as a 25-year depreciable asset when hardware cycles move in months. Instead, data centers must be built as adaptable, elemental systems: capable of evolving as power, cooling, and compute requirements continue to shift.

The session concluded with one obvious takeaway: AI is not a future state to prepare for. It is already shaping how data centers are built, where they are located, and how they generate value. The AI factory is no longer theoretical—and the industry is racing to build it fast enough.

As AI workloads push data center infrastructure in both centralized and distributed directions, the industry is rethinking where compute lives, how data moves, and who controls the networks in between. This episode captures highlights from “The Distributed Data Frontier: Edge, Interconnection, and the Future of Digital Infrastructure,” a panel discussion from the 2025 Data Center Frontier Trends Summit. Moderated by Scott Bergs of Dark Fiber and Infrastructure, the panel brought together leaders from DartPoints, 1623 Farnam, Duos Edge AI, ValorC3 Data Centers, and 365 Data Centers to examine how edge facilities, interconnection hubs, and regional data centers are adapting to rising power densities, AI inference workloads, and mounting connectivity constraints. Panelists discussed the rapid shift from legacy 4–6 kW rack designs to environments supporting 20–60 kW and beyond, while noting that many AI inference applications can be deployed effectively at moderate densities when paired with the right connectivity. Hospitals, regional enterprises, and public-sector use cases are emerging as key drivers of distributed AI infrastructure, particularly in tier 3 and tier 4 markets. The conversation also highlighted connectivity as a defining bottleneck. Permitting delays, middle-mile fiber constraints, and the need for early carrier engagement are increasingly shaping site selection and time-to-market outcomes. As data centers evolve into network-centric platforms, operators are balancing neutrality, fiber ownership, and long-term upgradability to ensure today's builds remain relevant in a rapidly changing AI landscape.

In this episode of the Data Center Frontier Show, DCF Editor in Chief Matt Vincent speaks with Uptime Institute research analyst Max Smolaks about the infrastructure forces reshaping AI data centers from power and racks to cooling, economics, and the question of whether the boom is sustainable. Smolaks unpacks a surprising on-ramp to today's AI buildout: former cryptocurrency mining operators that “discovered” underutilized pockets of power in nontraditional locations—and are now pivoting into AI campuses as GPU demand strains conventional markets. The conversation then turns to what OCP 2025 revealed about rack-scale AI: heavier, taller, more specialized racks; disaggregated “compute/power/network” rack groupings; and a white space that increasingly looks purpose-built for extreme density. From there, Vincent and Smolaks explore why liquid cooling is both inevitable and still resisted by many operators—along with the software, digital twins, CFD modeling, and new commissioning approaches emerging to manage the added complexity. On the power side, they discuss the industry's growing alignment around 800V DC distribution and what it signals about Nvidia's outsized influence on next-gen data center design. Finally, the conversation widens into load volatility and the economics of AI infrastructure: why “spiky” AI power profiles are driving changes in UPS systems and rack-level smoothing, and why long-term growth may hinge less on demand (which remains strong) than on whether AI profits broaden beyond a few major buyers—especially as GPU hardware depreciates far faster than the long-lived fiber built during past tech booms. A sharp, grounded look at the AI factory era—and the engineering and business realities behind the headlines.

In this Data Center Frontier Trends Summit 2025 session—moderated by Stu Dyer (CBRE) with panelists Aad den Elzen (Solar Turbines/Caterpillar), Creede Williams (Exigent Energy Partners), and Adam Michaelis (PointOne Data Centers)—the conversation centered on a hard truth of the AI buildout: power is now the limiting factor, and the grid isn't keeping pace.

Dyer framed how quickly the market has escalated, from “big” 48MW campuses a decade ago to today's expectations of 500MW-to-gigawatt-scale capacity. With utility timelines stretched and interconnection uncertainty rising, the panel argued that natural gas has moved from taboo to toolkit—often the fastest route to firm power at meaningful scale.

Williams, speaking from the IPP perspective, emphasized that speed-to-power requires firm fuel and financeable infrastructure, warning that “interruptible” gas or unclear supply economics can undermine both reliability and underwriting. Den Elzen noted that gas is already a proven solution across data center deployments, and in many cases is evolving from a “bridge” to a durable complement to the grid—especially when modular approaches improve resiliency and enable phased buildouts. Michaelis described how operators are building internal “power plant literacy,” hiring specialists and partnering with experienced power developers because data center teams can't assume they can self-perform generation projects.

The panel also “de-mystified” key technology choices—reciprocating engines vs. turbines—as tradeoffs among lead time, footprint, ramp speed, fuel flexibility, efficiency, staffing, and long-term futureproofing. On AI-era operations, the group underscored that extreme load swings can't be handled by rotating generation alone, requiring system-level design with controls, batteries, capacitors, and close coordination with tenant load profiles.

Audience questions pushed into public policy and perception: rate impacts, permitting, and the long-term mix of gas, grid, and emerging options like SMRs. The panel's consensus: behind-the-meter generation can help shield ratepayers from grid-upgrade costs, but permitting remains locally driven and politically sensitive—making industry communication and advocacy increasingly important.

Bottom line: in the new data center reality, natural gas is here—often not as a perfect answer, but as the one that matches the industry's near-term demands for speed, scale, and firm power.

In this episode, we crack open the world of ILA (In-Line Amplifier) huts, the unassuming shelters quietly powering fiber connectivity. Like mini utility substations of the fiber world, these small, secure, and distributed facilities keep internet, voice, and data networks running reliably, especially over long distances or in developing areas. From the analog roots of signal amplification to today's digital optical technologies, this conversation explores how ILAs are redefining long-haul fiber transport. We'll discuss how these compact, often rural, mini data centers are engineered and built to boost light signals across vast distances. But it's not just about the tech. There are real-world challenges to deploying ILAs: from acquiring land in varied environments, to coordinating civil construction at sites that are often remote and isolated. You'll learn why site selection is as much about geology and permitting as it is about signal loss, and what factors can make or break an ILA deployment. We also explore the growing role of hyperscalers and colocation providers in driving ILA expansion, adjacent revenue opportunities, and what ILA facilities can mean for the future of rural connectivity. Tune in to find out how the pulse of long-haul fiber is beating louder than ever.

In this panel session from the 2025 Data Center Frontier Trends Summit (Aug. 26-28) in Reston, Va., JLL's Sean Farney moderates a high-energy panel on how the industry is fast-tracking AI capacity in a world of power constraints, grid delays, and record-low vacancy. Under the banner “Scaling AI: The Role of Adaptive Reuse and Power-Rich Sites in GPU Deployment,” the discussion dives into why U.S. colocation vacancy is hovering near 2%, how power has become the ultimate limiter on AI revenue, and what it really takes to stand up GPU-heavy infrastructure at speed.

Schneider Electric's Lovisa Tedestedt, Aligned Data Centers' Phill Lawson-Shanks, and Sapphire Gas Solutions' Scott Johns unpack the real-world strategies they're deploying today—from adaptive reuse of industrial sites and factory-built modular systems, to behind-the-fence natural gas, microgrids, and emerging hydrogen and RNG pathways. Along the way, they explore the coming “AI inference edge,” the rebirth of the enterprise data center, and how AI is already being used to optimize data center design and operations.

During this talk, you'll learn:

* Why record-low vacancy and long interconnection queues are reshaping AI deployment strategy.
* How adaptive reuse of legacy industrial and commercial real estate can unlock gigawatt-scale capacity and community benefits.
* The growing role of liquid cooling, modular skids, and grid-to-chip efficiency in getting more power to GPUs.
* How behind-the-meter gas, virtual pipelines, and microgrids are bridging multi-year grid delays.
* Why many experts expect a renaissance of enterprise data centers for AI inference at the edge.

Moderator: Sean Farney, VP, Data Centers, Jones Lang LaSalle (JLL)

Panelists:
Tony Grayson, General Manager, Northstar
Lovisa Tedestedt, Strategic Account Executive – Cloud & Service Providers, Schneider Electric
Phill Lawson-Shanks, Chief Innovation Officer, Aligned Data Centers
Scott Johns, Chief Commercial Officer, Sapphire Gas Solutions

Recorded live at the 2025 Data Center Frontier Trends Summit in Reston, VA, this panel brings together leading voices from the utility, IPP, and data center worlds to tackle one of the defining issues of the AI era: power. Moderated by Buddy Rizer, Executive Director of Economic Development for Loudoun County, the session features:

Jeff Barber, VP Global Data Centers, Bloom Energy
Bob Kinscherf, VP National Accounts, Constellation
Stan Blackwell, Director, Data Center Practice, Dominion Energy
Joel Jansen, SVP Regulated Commercial Operations, American Electric Power
David McCall, VP of Innovation, QTS Data Centers

Together they explore how hyperscale and AI workloads are stressing today's grid, why transmission has become the critical bottleneck, and how on-site and behind-the-meter solutions are evolving from “bridge power” into strategic infrastructure. The panel dives into the role of gas-fired generation and fuel cells, emerging options like SMRs and geothermal, the realities of demand response and curtailment, and what it will take to recruit the next generation of engineers into this rapidly changing ecosystem. If you want a grounded, candid look at how energy providers and data center operators are working together to unlock new capacity for AI campuses, this conversation is a must-listen.

Live from the Data Center Frontier Trends Summit 2025 – Reston, VA

In this episode, we bring you a featured panel from the Data Center Frontier Trends Summit 2025 (Aug. 26-28), sponsored by Schneider Electric. DCF Editor in Chief Matt Vincent moderates a fast-paced, highly practical conversation on what “AI for good” really looks like inside the modern data center—both in how we build for AI workloads and how we use AI to run facilities more intelligently.

Expert panelists included:

Steve Carlini, VP, Innovation and Data Center Energy Management Business, Schneider Electric
Sudhir Kalra, Chief Data Center Operations Officer, Compass Datacenters
Andrew Whitmore, VP of Sales, Motivair

Together they unpack:

* How AI is driving unprecedented scale—from megawatt data halls to gigawatt AI “factories” and 100–600 kW rack roadmaps
* What Schneider and NVIDIA are learning from real-world testing of Blackwell and NVL72-class reference designs
* Why liquid cooling is no longer optional for high-density AI, and how to retrofit thousands of brownfield, air-cooled sites
* How Compass is using AI, predictive analytics, and condition-based maintenance to cut manual interventions and OPEX
* The shift from “constructing” to assembling data centers via modular, prefab approaches
* The role of AI in grid-aware operations, energy storage, and more sustainable build and operations practices
* Where power architectures, 800V DC, and industry standards will take us over the next five years

If you want a grounded, operator-level view into how AI is reshaping data center design, cooling, power, and operations—beyond the hype—this DCF Trends Summit session is a must-listen.

On this episode of The Data Center Frontier Show, Editor in Chief Matt Vincent sits down with Rob Campbell, President of Flex Communications, Enterprise & Cloud, and Chris Butler, President of Flex Power, to unpack Flex's bold new integrated data center platform as unveiled at the 2025 OCP Global Summit. Flex says the AI era has broken traditional data center models, pushing power, cooling, and compute to the point where they can no longer be engineered separately. Their answer is a globally manufactured, pre-engineered platform that unifies these components into modular pods and skids, designed to cut deployment timelines by up to 30 percent and support gigawatt-scale AI campuses. Rob and Chris explain how Flex is blending JetCool's chip-level liquid cooling with scalable rack-level CDUs; how higher-voltage DC architectures (400V today, 800V next) will reshape power delivery; and why Flex's 110-site global manufacturing footprint gives it a unique advantage in speed and resilience. They also explore Flex's lifecycle intelligence strategy, the company's circular-economy approach to modular design, and their view of the “data center of 2030”—a landscape defined by converged power and IT, liquid cooling as default, and modular units capable of being deployed in 30–60 days. It's a deep look at how one of the world's largest manufacturers plans to redefine AI-scale infrastructure.

Artificial intelligence is completely changing how data centers are built and operated. What used to be relatively stable IT environments are now turning into massive power ecosystems. The main reason is simple — AI workloads need far more computing power, and that means far more energy.

We're already seeing a sharp rise in total power consumption across the industry, but what's even more striking is how much power is packed into each rack. Not long ago, most racks were designed for 5 to 15 kilowatts. Today, AI-heavy setups are hitting 50 to 70 kW, and the next generation could reach up to 1 megawatt per rack. That's a huge jump — and it's forcing everyone in the industry to rethink power delivery, cooling, and overall site design.

At those levels, traditional AC power distribution starts to reach its limits. That's why many experts are already discussing a move toward high-voltage DC systems, possibly around 800 volts. DC systems can reduce conversion losses and handle higher densities more efficiently, which makes them a serious option for the future.

But with all this growth comes a big question: how do we stay responsible? Data centers are quickly becoming some of the largest power users on the planet. Society is starting to pay attention, and communities near these sites are asking fair questions — where will all this power come from, and how will it affect the grid or the environment? Building ever-bigger data centers isn't enough; we need to make sure they're sustainable and accepted by the public.

The next challenge is feasibility. Supplying hundreds of megawatts to a single facility is no small task. In many regions, grid capacity is already stretched, and new connections take years to approve. Add the unpredictable nature of AI power spikes, and you've got a real engineering and planning problem on your hands.
The only realistic path forward is to make data centers more flexible — to let them pull energy from different sources, balance loads dynamically, and even generate some of their own power on-site. That's where ComAp's systems come in. We help data center operators manage this complexity by making it simple to connect and control multiple energy sources — from renewables like solar or wind, to backup generators, to grid-scale connections. Our control systems allow operators to build hybrid setups that can adapt in real time, reduce emissions, and still maintain 100% reliability.

Just as importantly, ComAp helps with the grid integration side. When a single data center can draw as much power as a small city, it's no longer just a “consumer” — it becomes part of the grid ecosystem. Our technology helps make that relationship smoother, allowing these large sites to interact intelligently with utilities and maintain overall grid stability.

And while today's discussion is mostly around AC power, ComAp is ready for the DC future. The same principles and reliability that have powered AC systems for decades will carry over to DC-based data centers. We've built our solutions to be flexible enough for that transition — so operators don't have to wait for the technology to catch up.

In short, AI is driving a complete rethink of how data centers are powered. The demand and density will keep rising, and the pressure to stay responsible and sustainable will only grow stronger. The operators who succeed will be those who find smart ways to integrate different energy sources, keep efficiency high, and plan for the next generation of infrastructure. That's the space where ComAp is making a real difference.
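To put the density jump described above in perspective, here is a rough back-of-envelope sketch. The kW-per-rack figures are midpoints of the ranges quoted in the discussion, used purely for illustration:

```python
# Illustrative math for the rack-density jump: how many racks one megawatt
# of IT load supports at each density tier. Midpoint figures are assumptions
# drawn from the ranges quoted above, not measurements.

def racks_per_megawatt(kw_per_rack: float) -> float:
    """Number of racks a single megawatt of IT power supports."""
    return 1000.0 / kw_per_rack

legacy = racks_per_megawatt(10)     # 5-15 kW era, midpoint ~10 kW
ai_era = racks_per_megawatt(60)     # 50-70 kW AI racks, midpoint ~60 kW
future = racks_per_megawatt(1000)   # a hypothetical 1 MW rack

print(f"Legacy (~10 kW/rack):  {legacy:.0f} racks per MW")
print(f"AI era (~60 kW/rack):  {ai_era:.1f} racks per MW")
print(f"Future (1 MW/rack):    {future:.0f} rack per MW")
```

The same megawatt that once fed a hundred racks now feeds a handful — which is why per-rack power delivery and cooling, rather than floor space, become the binding constraints.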

In this episode of the DCF Show podcast, Data Center Frontier Editor in Chief Matt Vincent sits down with Bill Severn, CEO of 1623 Farnam, to explore how the Omaha carrier hotel is becoming a critical aggregation hub for AI, cloud, and regional edge growth. A featured speaker on The Distributed Data Frontier panel at the 2025 DCF Trends Summit, Severn frames the edge not as a location but as the convergence of eyeballs, network density, and content—a definition that underpins Farnam's strategy and rise in the Midwest. Since acquiring the facility in 2018, 1623 Farnam has transformed an underappreciated office tower on the 41st parallel into a thriving interconnection nexus with more than 40 broadband providers, 60+ carriers, and growing hyperscale presence. The AI era is accelerating that momentum: over 5,000 new fiber strands are being added into the building, with another 5,000 strands expanding Meet-Me Room capacity in 2025 alone. Severn remains bullish on interconnection for the next several years as hyperscalers plan deployments out to 2029 and beyond. The conversation also dives into multi-cloud routing needs across the region—where enterprises increasingly rely on Farnam for direct access to Google Central, Microsoft ExpressRoute, and global application-specific cloud regions. Energy efficiency has become a meaningful differentiator as well, with the facility operating below a 1.5 PUE, thanks to renewable chilled water, closed-loop cooling, and extensive free cooling cycles. Severn highlights a growing emphasis on strategic content partnerships that help CDNs and providers justify regional expansion, pointing to past co-investments that rapidly scaled traffic from 100G to more than 600 Gbps. Meanwhile, AI deployments are already arriving at pace, requiring collaborative engineering to fit cabinet weight, elevator limitations, and 40–50 kW rack densities within a non–purpose-built structure. 
As AI adoption accelerates and interconnection demand surges across the heartland, 1623 Farnam is positioning itself as one of the Midwest's most important digital crossroads—linking hyperscale backbones, cloud onramps, and emerging AI inference clusters into a cohesive regional edge.

In this episode, Matt Vincent, Editor in Chief at Data Center Frontier, is joined by Rob Macchi, Vice President of Data Center Solutions at Wesco, to explore how companies can stay ahead of the curve with smarter, more resilient construction strategies. From site selection to integrating emerging technologies, Wesco helps organizations build data centers that are not only efficient but future-ready. Listen now to learn more!

In this episode of the Data Center Frontier Show, we sit down with Ryan Mallory, the newly appointed CEO of Flexential, following a planned leadership transition from Chris Downie in October. Mallory outlines Flexential's strategic focus on the AI-driven future, positioning the company at the critical "inference edge" where enterprise CPU meets AI GPU. He breaks down the AI infrastructure boom into a clear three-stage build cycle and explains why the enterprise "killer app"—Agentic AI—plays directly into Flexential's strengths in interconnection and multi-tenant solutions.

We also dive into:

* Power Strategy: How Flexential's modular, 36-72 MW build strategy avoids community strain and wins utility favor.
* Product Roadmap: The evolution to Gen 5 and Gen 6 data centers, blending air and liquid cooling for mixed-density AI workloads.
* The Bold Bet: Mallory's vision for the next 2-3 years, which involves "bending the physics curve" with geospatial energy and transmission to overcome terrestrial limits.

Tune in for an insightful conversation on power, planning, and the future of data center infrastructure.

On this episode of the Data Center Frontier Show, DartPoints CEO Scott Willis joins Editor in Chief Matt Vincent to discuss why regional data centers are becoming central to the future of AI and digital infrastructure. Fresh off his appearance on the Distributed Edge panel at the 2025 DCF Trends Summit, Willis breaks down how DartPoints is positioning itself in non-tier-one markets across the Midwest, Southeast, and South Central regions—locations he believes will play an increasingly critical role as AI workloads move closer to users. Willis explains that DartPoints' strategy hinges on a deeply interconnected regional footprint built around carrier-rich facilities and strong fiber connectivity. This fabric is already supporting latency-sensitive workloads such as AI inference and specialized healthcare applications, and Willis expects that demand to accelerate as enterprises seek performance closer to population centers. Following a recent recapitalization with NOVA Infrastructure and Orion Infrastructure Capital, DartPoints has launched four new expansion sites designed from the ground up for higher-density, AI-oriented workloads. These facilities target rack densities from 30 kW to 120 kW and are sized in the 10–50 MW range—large enough for meaningful HPC and AI deployments but nimble enough to move faster than hyperscale builds constrained by long power queues. Speed to market is a defining advantage for DartPoints. Willis emphasizes the company's focus on brownfield opportunities where utility infrastructure already exists, reducing deployment timelines dramatically. For cooling, DartPoints is designing flexible environments that leverage advanced air systems for 30–40 kW racks and liquid cooling for higher densities, ensuring the ability to support the full spectrum of enterprise, HPC, and edge-adjacent AI needs. Willis also highlights the importance of community partnership. 
DartPoints' facilities have smaller footprints and lower power impact than hyperscale campuses, allowing the company to serve as a local economic catalyst while minimizing noise and aesthetic concerns. Looking ahead to 2026, Willis sees the industry entering a phase where AI demand becomes broader and more distributed, making regional markets indispensable. DartPoints plans to continue expanding through organic growth and targeted M&A while maintaining its focus on interconnection, high-density readiness, and rapid, community-aligned deployment. Tune in to hear how DartPoints is shaping the next chapter of distributed digital infrastructure—and why the market is finally moving toward the regional edge model Willis has championed.

In this episode of the Data Center Frontier Show, DCF Editor-in-Chief Matt Vincent speaks with Ed Nichols, President and CEO of Expanse Energy / RRPT Hydro, and Gregory Tarver, Chief Electrical Engineer, about a new kind of hydropower built for the AI era. RRPT Hydro's piston-driven gravity and buoyancy system generates electricity without dams or flowing rivers—using the downward pull of gravity and the upward lift of buoyancy in sealed cylinders. Once started, the system runs self-sufficiently, producing predictable, zero-emission power. Designed for modular, scalable deployment—from 15 kW to 1 GW—the technology can be installed underground or above ground, enabling data centers to power themselves behind the meter while reducing grid strain and even selling excess energy back to communities. At an estimated Levelized Cost of Energy of $3.50/MWh, RRPT Hydro could dramatically undercut traditional renewables and fossil power. The company is advancing toward commercial readiness (TRL 7–9) and aims to build a 1 MW pilot plant within 12–15 months. Nichols and Tarver describe this moonshot innovation, introduced at the 2025 DCF Trends Summit, as a “Wright Brothers moment” for hydropower—one that could redefine sustainable baseload energy for data centers and beyond. Listen now to explore how RRPT Hydro's patented piston-driven system could reshape the physics, economics, and deployment model of clean energy.
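For context on claims like the $3.50/MWh figure above, levelized cost of energy (LCOE) is, at its simplest, lifetime cost divided by lifetime energy produced. A deliberately simplified, undiscounted sketch follows; the input numbers are hypothetical, not RRPT Hydro's:

```python
def simple_lcoe(capex: float, annual_opex: float,
                annual_mwh: float, years: int) -> float:
    """Undiscounted LCOE in $/MWh: lifetime cost over lifetime energy.
    Real LCOE models discount both streams; this is a simplified sketch."""
    lifetime_cost = capex + annual_opex * years
    lifetime_energy_mwh = annual_mwh * years
    return lifetime_cost / lifetime_energy_mwh

# Hypothetical plant, purely for illustration:
# $1M capex, $20k/yr opex, 8,000 MWh/yr output, 25-year life.
print(f"${simple_lcoe(1_000_000, 20_000, 8_000, 25):.2f}/MWh")
```

The point of the metric is that both very low capex and very high lifetime output drive the $/MWh figure down, which is what any ultra-low LCOE claim ultimately rests on.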

At this year's Data Center Frontier Trends Summit, Honghai Song, founder of Canyon Magnet Energy, presented his company's breakthrough superconducting magnet technology during the “6 Moonshot Trends for the 2026 Data Center Frontier” panel—showcasing how high-temperature superconductors (HTS) could reshape both fusion energy and AI data-center power systems. In this episode of the Data Center Frontier Show, Editor in Chief Matt Vincent speaks with Song about how Canyon Magnet Energy—founded in 2023 and based in New Jersey, with roots at Stony Brook University—is bridging fusion research and AI infrastructure through next-generation magnet and energy-storage technology. Song explains how HTS magnets, made from REBCO (Rare Earth Barium Copper Oxide), operate at 77 Kelvin with zero electrical resistance, opening the door to new kinds of super-efficient power transmission, storage, and distribution. The company's SMASH (Superconducting Magnetic Storage Hybrid) system is designed to deliver instant bursts of energy—within milliseconds—to stabilize GPU-driven AI workloads that traditional batteries and grids can't respond to fast enough. Canyon Magnet Energy is currently developing small-scale demonstration projects pairing SMES systems with AI racks, exploring integration with DC power architectures and liquid-cooling infrastructure. The long-term roadmap envisions multi-mile superconducting DC lines connecting renewables to data centers—and ultimately, fusion power plants providing virtually unlimited clean energy. Supported by an NG Accelerate grant from New Jersey, the company is now seeking data-center partners and investors to bring these technologies from the lab into the field.

Who is Packet Power?

Since 2008, Packet Power has been at the forefront of energy and environmental monitoring, pioneering wireless solutions that helped define the modern Internet of Things (IoT). Built on the belief that energy is the new cost frontier of computation, Packet Power enables organizations to understand exactly where, when, and how energy is used—and at what cost. As AI-driven workloads push energy demand to record levels, Packet Power's mission of complete energy traceability has never been more critical. Their systems are trusted worldwide for providing secure, out-of-band monitoring that remains fully independent of operational data networks.

Introducing the All-New High-Density Power Monitor

Packet Power's newest innovation, the High-Density Power Monitor, is redefining what's possible in energy monitoring. At just under 6 cubic inches, it's the smallest and most scalable multi-circuit power monitoring system on the market, capable of tracking 120 circuits in a space smaller than what's inside a standard light switch. The High-Density Power Monitor eliminates bulky hardware, complex wiring, and lengthy installations. It's plug-and-play simple, seamlessly integrates with Packet Power's EMX software or any third-party monitoring platform, and supports both wired and wireless connectivity—including secure, air-gapped environments.

Solving the Challenges of Modern Power Monitoring

The High-Density Power Monitor is engineered for the next generation of high-performance systems and facilities. It tackles five key challenges:

* Power Density: Monitors high-load environments with unmatched precision.
* Circuit Density: Tracks more circuits per module than any competitor.
* Physical Density: Fits anywhere, from PDUs to sub-panels to embedded devices.
* Installation Simplicity: Snaps into place—no tools, no complexity.
* Connection Flexibility: Wireless, wired, LAN, cloud, or cellular—mix and match freely.

Whether managing a single rack or thousands of devices, Packet Power ensures monitoring 1 device is as easy as monitoring 1,000.

Why It Matters Now

Today's computing environments are experiencing an energy density arms race—with systems consuming megawatts of power in a single cabinet. New cooling methods, extreme power densities, and evolving form factors demand monitoring solutions that can keep up. Packet Power's new High-Density Power Monitor meets that challenge head-on, offering the scalability, adaptability, and visibility needed to manage energy use in the AI era.

Perfect for Any Application

This solution is ideal for:

* High-density servers and compute cabinets
* Distribution panels, PDUs, and busway components
* Embedded monitoring in OEM systems
* Large-scale deployments requiring fleet-level simplicity
* And more!

Whether for new installations or retrofits of existing buildings, Packet Power systems deliver vendor-agnostic integration and proven scalability, with unmatched turnaround times and products made in the USA for BABA compliance.

Learn More!

Discover the true meaning of small & mighty:

In this episode of The Data Center Frontier Show, DCF Editor-in-Chief Matt Vincent talks with Yuval Boger, Chief Commercial Officer at QuEra Computing, about the fast-evolving intersection of quantum and AI-accelerated supercomputing. QuEra, a Boston-based pioneer in neutral-atom quantum computers, recently expanded its $230 million funding round with new investment from NVentures (NVIDIA's venture arm) and announced a Nature-published breakthrough in algorithmic fault tolerance that dramatically cuts runtime overhead for error-corrected quantum algorithms. Boger explains how QuEra's systems, operating at room temperature and using identical rubidium atoms as qubits, offer scalable, power-efficient performance for HPC and cloud environments. He details the company's collaborations with NVIDIA, AWS, and global supercomputing centers integrating quantum processors alongside GPUs, and outlines why neutral-atom architectures could soon deliver practical, fault-tolerant quantum advantage. Listen as Boger discusses QuEra's technology roadmap, market position, and the coming inflection point where hybrid quantum-classical systems move from the lab into the data center mainstream.

Matt Vincent, Editor-in-Chief of Data Center Frontier, sits down with Angela Capon, Vice President of Marketing at EdgeConneX, to discuss the groundbreaking collaboration between EdgeConneX and the Duke of Edinburgh's International Award Program.

Charting the Future of AI Storage Infrastructure

In this episode, Solidigm Director of Strategic Planning Brian Jacobosky guides listeners through a tech-forward conversation on how storage infrastructure is helping redefine the AI-era data center. The discussion frames storage as more than just a cost factor; it's also a strategic building block for performance, efficiency, and savings.

Storage Moves to the Center of AI Data Infrastructure

Jacobosky explains how, in the AI-driven era, storage priorities have shifted from a commodity metric like "dollars per gigabyte" to core goals: maximizing GPU utilization, managing soaring power draw, and unlocking space savings. He illustrates how every watt and every square inch counts. As GPU compute scales dramatically, storage efficiency is being engineered to enable maximum density and throughput.

High-Capacity SSDs as a Game-Changer

Jacobosky spotlights Solidigm's D5-P5336 122TB SSDs as emblematic of the shift. Rather than a simple technical refresh, these drives represent a tectonic realignment in how data centers are being designed for huge capacity and optimized performance. With all-flash deployments offering up to nine times the space savings of hybrid architectures, Jacobosky underscores how SSD density can enable more GPU scale within fixed power and space budgets. That trajectory could even pave the way to a 1-petabyte SSD by the end of the decade.

Embedded Efficiency

The episode brings environmental considerations to the forefront. Jacobosky shares how an "all-SSD" strategy can dramatically slash physical footprints as well as energy consumption. From data center buildout through end-of-lifecycle drive retirement, efficiency is driving both operational cost savings and ESG benefits — helping reduce concrete and steel usage, power draw, and e-waste.
Pioneering Storage Architectures and Cooling Innovation

Listeners learn how AI-first innovators like neocloud-style providers and sovereign AI operators lead the charge in deploying next-generation storage. Jacobosky also previews the Solidigm PS-1010 in the E1.S form factor, a solution for NVIDIA fanless servers that enables direct-to-chip, cold-plate-cooled SSDs integrated into GPU servers. He predicts that this systems-level integration will become a standard for high-density AI infrastructure.

Storage as a Strategic Investment

Solidigm challenges the notion that high-capacity storage is cost-prohibitive. Within the framework of the AI token economy, Jacobosky explains, the true measure becomes minimizing cost per token and time to first token; when storage is optimized for performance, capacity, and efficiency, the total cost of ownership (TCO) often proves favorable on first evaluation.

Looking Ahead: Memory Wall, Inference Workloads, Liquid Cooling

Jacobosky ends with a look at where storage innovation will lead in the next five years. As AI models grow in size and complexity, he argues, storage is increasingly acting as an extension of memory, breaking through the "memory wall" for large inference workloads. Companies will design infrastructure from the ground up with liquid cooling and future-scalable storage that supports massive model deployments without compromising latency. This episode is essential listening for data center architects, AI infrastructure strategists, and sustainability leaders looking to understand how storage is fast becoming a defining factor in AI-ready data centers of the future.
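The cost-per-token framing above can be made concrete with a back-of-envelope calculation. The sketch below is purely illustrative: every number (hourly cluster cost, throughput, utilization figures, the $10/hour flash premium) is a made-up assumption, not a figure from the episode or from Solidigm. The point it demonstrates is the general one Jacobosky makes: storage that keeps GPUs better fed can lower cost per token even when the storage itself costs more.

```python
# Illustrative cost-per-token arithmetic; all figures are hypothetical
# assumptions for the sake of the example, not vendor or market data.

def cost_per_million_tokens(hourly_infra_cost: float,
                            tokens_per_second: float,
                            utilization: float) -> float:
    """Infrastructure cost per one million generated tokens.

    hourly_infra_cost: fully loaded $/hour (GPUs, storage, power, space).
    tokens_per_second: sustained inference throughput at full utilization.
    utilization: fraction of peak throughput actually achieved
                 (slow storage starves GPUs and drags this down).
    """
    effective_tps = tokens_per_second * utilization
    tokens_per_hour = effective_tps * 3600.0
    return hourly_infra_cost / tokens_per_hour * 1_000_000

# Hypothetical cluster: $400/hour at 50,000 tokens/s peak.
# Slow storage limits GPUs to 60% utilization; faster flash adds
# $10/hour but lifts utilization to 90%.
slow = cost_per_million_tokens(400.0, 50_000, 0.60)
fast = cost_per_million_tokens(410.0, 50_000, 0.90)
print(f"${slow:.2f} vs ${fast:.2f} per 1M tokens")  # $3.70 vs $2.53
```

Under these assumed numbers, the more expensive storage tier still wins on cost per token, which is the sense in which TCO "proves favorable" despite a higher line-item price.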

Florida is emerging as one of the most promising new frontiers for data center growth — combining power availability, policy alignment, and strategic geography in ways that mirror the early success of Northern Virginia. In this episode of The Data Center Frontier Show, Editor-in-Chief Matt Vincent sits down with Buddy Rizer, Executive Director of Loudoun County Economic Development, and Lila Jaber, Founder of Florida's Women in Energy Leadership Forum and former Chair of the Florida Public Service Commission. Together, they explore how Florida is building the foundation for large-scale digital infrastructure and AI data center investment.

Episode Highlights:

- Energy Advantage: While Loudoun County faces a 600-megawatt deficit and rising demand, Florida enjoys excess generation capacity, proactive utilities, and growing renewable integration. Utilities like FPL and Duke Energy are preparing for hyperscale and AI-driven loads with new tariff structures and grid-hardening investments.
- Tax Incentives & Workforce: Florida's extended data center sales tax exemption through 2037 and its raised 100-megawatt IT load threshold signal a commitment to hyperscale development. The state's universities and workforce programs are aligned with this tech growth, producing top talent in engineering and applied sciences.
- Strategic Location: As a digital gateway to Latin America and the Caribbean, Florida's connectivity advantage—especially around Miami—is attracting hyperscale and AI operators looking to expand globally.
- Market Outlook: Industry insiders predict that within the next year, a major data center player will establish a significant footprint in Florida. Multiple campuses are expected to follow, driven by the state's power resilience, policy stability, and collaborative approach between utilities, developers, and government leaders.
Why It Matters: Florida's combination of energy abundance, policy foresight, and strategic geography positions it as the next great growth market for digital infrastructure and AI-ready data centers in North America.

This podcast explores the rapidly evolving thermal and water challenges facing today's data centers as AI workloads push rack densities to unprecedented levels. The discussion highlights the risks and opportunities tied to liquid cooling—from pre-commissioning practices and real-time monitoring to system integration and water stewardship. Ecolab's innovative approaches to thermal management can not only solve operational constraints but also deliver competitive advantage by improving efficiency, reducing resource consumption, and strengthening sustainability commitments.

Join Bill Tierney of The Data Center Construction Alliance as he discusses some of the emerging challenges facing data center development today. Topics include how increasing collaboration between OEMs, owners, contractors, and subcontractors is leading to exciting and innovative solutions in the design and construction of data centers. He will also share examples of how collaboration has led to new ideas and methodologies in the field.

AI networks are driving dramatic changes in data center design, especially around power, cooling, and connectivity. Modern GPU-powered AI data centers require far more energy and generate much more heat than traditional CPU-based setups, pushing cabinets to new power densities and necessitating advanced cooling solutions like liquid direct-to-chip cooling. These environments also demand significantly more fiber cabling to handle increased data flows, with deeper cabinets and complex layouts that make traditional rear-access cabling impractical.
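To give a sense of why direct-to-chip liquid cooling becomes unavoidable at these power densities, the required coolant flow for a given rack load follows from the basic heat-transfer relation Q = m·c·ΔT. The sketch below is a back-of-envelope estimate only: the 100 kW rack load and 10 °C coolant temperature rise are assumed illustrative figures, not values from this article.

```python
# Back-of-envelope coolant flow estimate for direct-to-chip liquid cooling.
# All input figures are illustrative assumptions, not vendor specifications.

def coolant_flow_lpm(rack_power_kw: float, delta_t_c: float = 10.0) -> float:
    """Liters per minute of water needed to absorb rack_power_kw of heat
    at a coolant temperature rise of delta_t_c (degrees Celsius)."""
    specific_heat = 4186.0   # J/(kg*K) for water
    density = 1.0            # kg/L for water
    heat_w = rack_power_kw * 1000.0
    mass_flow_kg_s = heat_w / (specific_heat * delta_t_c)  # Q = m*c*dT
    return mass_flow_kg_s / density * 60.0                 # kg/s -> L/min

# A hypothetical 100 kW AI rack with a 10 C coolant rise:
flow = coolant_flow_lpm(100.0)
print(f"{flow:.0f} L/min")  # roughly 143 L/min
```

Even this simplified estimate shows why air cooling runs out of headroom: moving the equivalent heat with air, whose volumetric heat capacity is roughly 3,500 times lower than water's, would require an impractical airflow through the cabinet.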

In this DCF Trends-Nomads at the Summit Podcast episode, the hosts from Data Center Frontier and Nomad Futurist sit down with Adrienne Pierce, CEO of New Sun Road, to explore the emerging frontier of sovereign and renewable energy solutions for modular data center deployment. With over 1,500 microgrids under management via the company's Stellar platform, Pierce brings a field-tested perspective on how flexible, AI-driven energy controls can empower edge and sub-10 MW data center systems—especially in regions where traditional grid infrastructure can't keep up with AI-era demands. This discussion dives into the real-world opportunities for modular, microgrid-powered data centers to unlock new markets, reduce energy costs, and create more resilient and autonomous compute infrastructure at the edge and beyond. Expect sharp insights into what it means to decouple data center growth from utility bottlenecks—and how the right energy intelligence can accelerate both sustainability and scalability.

In this DCF Trends-Nomads at the Summit Podcast episode, the hosts of Data Center Frontier and Nomad Futurist sit down with UVA Darden MBA candidates Tosin Fashola and Albert Odum for an energizing conversation about next-generation data infrastructure—and why they believe Africa is poised to be its future epicenter. With professional backgrounds spanning data center strategy at KPMG and government-led implementations in Ghana, Tosin and Albert bring fresh, globally-minded perspectives on AI infrastructure, regional power strategy, and the role of connectivity in economic transformation. Expect a wide-ranging dialogue on the untapped potential of African markets, the roadmap to building sovereign cloud capacity and IXPs, and how a new generation of leaders is preparing to close the global digital divide—one hyperscale project at a time.

In this DCF Trends-Nomads at the Summit Podcast episode, Data Center Frontier editors and Nomad Futurist hosts sit down with Greg Stover, Vertiv's Global Director, Hi-Tech Development. The discussion delves into Stover's work at the intersection of advanced cooling technologies, hyperscale growth, and AI-driven infrastructure design. Drawing on his experience guiding Vertiv's strategy for high-density deployments, liquid cooling adoption, and close collaboration with hyperscalers and chipmakers, Stover offers a forward-looking perspective on how evolving compute architectures, thermal management innovations, and market forces are redefining the competitive edge in the data center industry.

In this DCF Trends-Nomads at the Summit Podcast episode, the ever-curious, future-focused podcast hosts from Data Center Frontier and Nomad Futurist reunite with Infrastructure Masons CEO Santiago Suinaga for a timely, in-depth follow-up to his impactful debut on the DCF Show. With AI infrastructure growth hitting warp speed, the conversation will dig deeper into Suinaga's vision for how the digital infrastructure community can scale responsibly—without losing sight of net zero goals, workforce development, or supply chain accountability. Expect a candid, high-level exchange on emerging regulatory pressures, the embodied carbon challenge, and why flexible cooling and modular design must be table stakes for the AI-powered data center of the future. Suinaga will also share the latest on iMasons' Climate Accord, job-matching platform, and new cross-sector partnerships—all aimed at fostering sustainability, equity, and innovation in an industry racing to keep pace with exponential demand.

In this DCF Trends-Nomads at the Summit Podcast episode, Chris James, CEO of NoesisAI, delivers a sweeping, insight-rich overview of how different classes of AI models—from LLMs and RAG to vision AI and scientific workloads—are driving a new wave of infrastructure decisions across the data center landscape. With a sharp focus on the diverging needs of training vs. inference, James breaks down what it takes to support today's AI—from GPU-intensive clusters with high-speed interconnects and liquid cooling to inference-optimized, edge-deployed accelerators. He also explores the rapidly shifting hardware ecosystem, including the rise of custom silicon, heterogeneous computing, and where the battle between NVIDIA, AMD, Intel, and hyperscaler-designed chips is headed. Whether you're designing for scalability, sustainability, or the bleeding edge, this conversation offers a field guide to the infrastructure behind intelligent computing.

In this DCF Trends-Nomads at the Summit Podcast episode, the Data Center Frontier and Nomad Futurist hosts engage in a dynamic, behind-the-scenes conversation with two of the most influential voices shaping digital infrastructure communications: Ilissa Miller, founder of iMiller PR, and Adam Waitkunas, founder of Milldam PR. With decades of experience guiding some of the industry's most prominent brands through launches, crises, and rebranding efforts, Miller and Waitkunas offer an unfiltered look at what it really takes to cut through the noise in a crowded, technically complex market. From telling the right story about AI and sustainability, to building trust across hyperscalers, investors, and public stakeholders, this episode explores the evolving narrative demands of the data center space—and why strategic communications is now mission-critical to business success. Expect honest reflections, practical PR wisdom, and a few war stories from the front lines of digital infrastructure storytelling.

In this DCF Trends-Nomads at the Summit Podcast episode, the editors of Data Center Frontier and the hosts of Nomad Futurist sit down with Lovisa Tedestedt, Sales Executive at Schneider Electric, where she focuses on colo acquisition accounts. With more than 25 years of experience in international sales management, business development, and leadership, Lovisa has built a career defined by strong client relationships, bold growth strategies, and a passion for delivering excellence. From Sweden to China, Europe to the U.S., Lovisa brings a truly global perspective to the data center industry. In this conversation, she shares insights on strategic planning, high-stakes negotiations, and the importance of adaptability in today's fast-changing market. Beyond her career, Lovisa talks about life outside of work as an avid hockey mom, now based in Des Moines, Iowa with her husband and two teenage children. Join us for a conversation that blends global business lessons, sales leadership, and the personal side of a career in the digital infrastructure world.

In this DCF Trends-Nomads at the Summit Podcast episode, the editors of Data Center Frontier and the hosts of Nomad Futurist sit down with Doug Recker, a telecommunications veteran and edge data center pioneer with more than 30 years of industry leadership. Today, Recker leads Duos Edge AI, driving initiatives to bring multi-access edge data centers (EDCs) to underserved communities, including schools and health facilities across the U.S. From founding Edge Presence (acquired by Ubiquity in 2023) and Colo5 Data Centers (later acquired by Cologix in 2014), to deploying more than 40 TM2500 units worldwide, Recker has consistently been at the forefront of building scalable, resilient infrastructure. His career is marked by multiple honors, including Northeast Florida's Ultimate CEO Award and recognition among Inc. 500's fastest-growing companies. In this conversation, Recker shares insights on the evolution of edge computing, lessons learned from decades in telecom and data centers, and how his time in the U.S. Marine Corps shaped his leadership philosophy. Tune in for a wide-ranging discussion on innovation, resilience, and the future of edge AI.

In this DCF Trends-Nomads at the Summit Podcast episode, Matt Grandbois, Vice President at AirJoule, introduces a game-changing approach to one of the data center industry's most pressing challenges: water sustainability. As power-hungry, high-density environments collide with growing water scarcity concerns, Grandbois lays out a compelling vision for water-positive data centers—facilities that produce more water than they consume. Leveraging AirJoule's advanced atmospheric water harvesting technology, he explains how waste heat, typically seen as a problem to mitigate, can become a valuable resource for onsite water generation. From adiabatic cooling and humidification to local water replenishment, this conversation opens up new possibilities for sustainable design, reduced PUE, and location flexibility—redefining what it means for data centers to be responsible community partners.

Speakers:
- Mike Klassen, Director of Business Development, ZincFive
- Sugam Patel, VP of Product Management, DG Matrix

In this DCF Trends-Nomads at the Summit Podcast episode, experts from ZincFive and DG Matrix unpack how medium voltage (MV) UPS architectures are redefining the way data centers power up for AI. As AI densification pushes traditional infrastructure to its limits, MV UPS solutions offer a path forward—boosting efficiency, reducing heat and losses, and reclaiming floor space for compute. The conversation delves into how higher voltage translates into smarter, more scalable designs that not only meet the demands of today's high-performance AI workloads but also future-proof facilities for what's coming next. From design frameworks to deployment strategies, Klassen and Patel provide a grounded, technical look at the UPS shift already underway.

In this episode of the DCF Trends–Nomads at the Summit podcast, we bring together two dynamic voices shaping the future of digital infrastructure: Melissa Farney, Editor at Large for Data Center Frontier and board member of the Nomad Futurist Foundation, and Bill Kleyman, Contributing Editor for Data Center Frontier and CEO of Apolo, who also serves as a member of the Nomad Futurist Foundation. Melissa and Bill join up for a candid discussion on the biggest trends transforming the data center and digital ecosystem. From AI-driven growth and sustainability challenges to the human capital needed to sustain the industry's rapid expansion, they share a unique blend of editorial perspective and executive experience. This episode also dives into the mission of the Nomad Futurist Foundation: inspiring and equipping the next generation of leaders in the digital infrastructure space. Listeners will gain insights not just into market shifts, but also into the values and vision shaping the future of the field. Tune in for an engaging conversation at the intersection of thought leadership, industry transformation, and the mission to build a more resilient, inclusive digital future.

Speakers:
- Joseph Ford, Senior Associate – Technology, Bala Consulting Engineers
- Eric Klaiber, Data Center Design Manager, Bala Consulting Engineers

In this DCF Trends-Nomads at the Summit Podcast episode, Joseph Ford and Eric Klaiber of Bala Consulting Engineers offer a consultant engineer's hard-won perspective on the complex realities of designing infrastructure for hyperscale, MTDC, and wholesale data centers. Drawing on years of field experience, they dig into the nuanced choreography required to align incoming duct banks, meet-me room layouts, and overlapping network systems—all while staying within the spatial constraints driven by power and cooling demands. This candid conversation highlights what it really takes to create design harmony across client expectations, design teams, and contractors, with insights into space planning, coordination strategy, and the delicate balance of infrastructure coexistence that underpins modern high-performance facilities.

In this DCF Trends-Nomads at the Summit Podcast episode, Data Center Frontier and Nomad Futurist hosts sit down with Bob Cassiliano, Chairman & CEO of 7x24 Exchange International, for a wide-ranging conversation on the state of mission-critical infrastructure and the evolving challenges facing the data center industry. As the leader of one of the most influential organizations in the space, Cassiliano offers a national perspective on power constraints, workforce development, sustainability pressures, and the cultural shifts reshaping operations and leadership across the digital infrastructure landscape. The discussion also highlights how 7x24 Exchange continues to serve as a vital convening force for collaboration, education, and resilience in an industry tasked with powering the AI era. With decades of insight and a pulse on what's next, Cassiliano shares where the data center sector must go to meet the moment.

AI has pushed liquid cooling from a niche technology to a critical requirement for high-density data centers. In this episode, Pat McGinn, COO and President of CoolIT Systems, shares why AI is driving liquid cooling from optional to essential. He explains how CoolIT helps customers deliver AI systems at speed and scale through proven capacity, modular solutions, and dedicated engineering support. Listeners will gain insight into the trends shaping adoption, examples of customer success, and what the future holds for high-performance, sustainable cooling.

In this episode, we're joined by Justin Loritz, Product Manager for Large Diesel at Rehlko, to explore how the company is redefining the role of a manufacturer in today's dynamic data center landscape. Rehlko isn't just delivering equipment; it's delivering answers. As Justin shares, Rehlko's philosophy centers on being a true solutions provider: collaborating early, working through complexity, and staying flexible to meet each customer's unique challenges. Whether it's identifying alternative components, navigating supply constraints, or designing systems that meet aggressive density and uptime requirements, Rehlko's engineers partner closely with customers to ensure no detail is overlooked. Their process is driven by a deep understanding of the application, operational goals, and broader market context, allowing them to fine-tune specifications and avoid missteps that could compromise performance or timelines.

Justin also discusses how this proactive, collaborative mindset extends beyond the customer relationship. By engaging with industry organizations like iMasons and contributing to shared challenges like power availability and infrastructure strain, Rehlko helps move the entire ecosystem forward.

Key discussion points include:

- What it means to be a solutions provider in a high-demand, high-stakes environment
- How Rehlko engineers collaborate to solve challenges before they impact project delivery
- Why deep application knowledge is essential to right-sizing designs and avoiding over- or under-specification
- How industry collaboration is key to unlocking new energy strategies, sourcing approaches, and long-term resilience

For data center leaders navigating rising demand and tighter constraints, this episode highlights how Rehlko's engineering-first, collaboration-driven approach is helping customers stay ahead, delivering smarter, more resilient infrastructure for the AI-powered future.

As artificial intelligence (AI) reshapes the data center landscape, power resiliency is being tested like never before. With enormous new facilities coming online and operators exploring alternatives to diesel, the backup power market is at an inflection point. In this episode of the Data Center Frontier Show, we sit down with Ricardo Navarro, Vice President of Global Solutions at Generac Power Systems, to discuss how the company is positioning itself as a major player in the data center ecosystem.

Diesel Still Reigns — For Now

Navarro begins by addressing the foundational question: why diesel remains the primary backup power choice for hyperscale and AI workloads. The answer, he explains, comes down to density, responsiveness, and reliability. Diesel engines respond instantly to the fluctuating loads that are common in AI training clusters, and fuel can be stored directly on-site. While natural gas is gaining traction as a bridging and utility-support solution, true redundancy requires dual pipelines — a level of infrastructure not yet common in data center deployments. That said, Navarro is clear that the story doesn't end with diesel. He sees a future where natural gas, paired with batteries, becomes a cost-effective and environmentally attractive option. Hybrid systems, combined with demand response and grid participation programs, could give operators new tools for balancing reliability and sustainability. "Natural gas might not be the right solution right now, but definitely it will be in the future," Navarro notes.

Scaling Fast to Meet Hyperscaler Demands

The conversation also explores how hyperscalers are shaping requirements. With campuses needing hundreds of generators, customers are asking not just about product performance, but about scale, lead times, and support. Generac is addressing that demand by delivering open sets in as little as 30 to 35 weeks — about a third of the wait time from traditional OEMs.
That speed-to-deployment advantage has driven significant new interest in Generac across the hyperscale sector.

From Generators to Energy Technology

Equally important is Generac's shift toward digital tools and predictive services. Over the past decade, the company has invested in acquisitions such as Deep Sea Electronics, Blue Pillar, and Off Grid Energy, expanding its expertise in controls, telemetry, and microgrid integration. Today, Generac is layering advanced sensors, machine learning, and AI-driven analytics onto its equipment fleet, enabling predictive failure detection, condition-based maintenance, and smarter load orchestration. This evolution, Navarro explains, represents Generac's transformation "from being just a generator manufacturer to being an energy technology company."

What's Next for Generac

Looking ahead, the company is putting real capital behind its ambitions. Generac recently completed a $130 million facility in Beaver Dam, Wisconsin, designed to expand production capacity and meet surging demand from data center customers. With firm domestic and international orders already in place, Navarro says the company is determined "to be in the driver's seat" as AI accelerates the need for scalable, resilient, and flexible backup power. For data center leaders, this episode provides a clear look into how backup power strategies are evolving — and how one of the industry's largest players is preparing for the next wave of energy and infrastructure challenges.

Columbus Hosts First Nvidia HGX B200 AI Cluster, Scaling AI at the Aggregated Edge

In this episode of the Data Center Frontier Show, Matt Vincent sits down with Bill Bentley (Cologix) and Ken Patchett (Lambda) to discuss Columbus, Ohio's first Nvidia HGX B200 AI cluster deployment. The conversation dives into:

- Why Columbus is emerging as a strategic hub for AI workloads in the Midwest.
- How Lambda's one-click clusters and Cologix's interconnection-rich campus enable rapid provisioning, low-latency inference, and scalable enterprise AI.
- Flexible GPU consumption models that lower entry barriers for startups and allow enterprises to scale efficiently.
- Innovations in energy efficiency, cooling, and sustainability as data centers evolve to meet the demands of modern AI.
- The impact on regional industries like healthcare, manufacturing, and logistics—and why this deployment is a repeatable playbook for future AI clusters.

Join us to hear how AI is being brought closer to the point of need, transforming the Midwest into a next-generation AI infrastructure hub.

Artificial intelligence is changing the data center industry faster than anyone anticipated. Every new wave of AI hardware pushes power, density, and cooling requirements to levels once thought impossible — and operators are scrambling to keep pace. In this episode of the Data Center Frontier Show, Schneider Electric's Steven Carlini joins us to unpack what it really means to build infrastructure for the AI era. Carlini explains how the conversation around density has shifted in just a year: "Last year, everyone was talking about the one-megawatt rack. Now densities are approaching 1.5 megawatts. It's moving that fast, and the infrastructure has to keep up." These rapid leaps in scale aren't just about racks and GPUs. They represent a fundamental change in how data centers are designed, cooled, and powered.

The discussion dives into the new imperatives for AI-ready facilities:

- Power planning that anticipates explosive growth in compute demand.
- Liquid and hybrid cooling systems capable of handling extreme densities.
- Modularity and prefabrication to shorten build times and adapt to shifting hardware generations.
- Sustainability and responsible design that balance innovation with environmental impact.

Carlini emphasizes that operators can't treat these as optional upgrades. Flexibility, efficiency, and sustainability are now prerequisites for competitiveness in the AI era. Looking beyond hardware, Carlini highlights the diversity of AI workloads — from generative models to autonomous agents — that will drive future requirements. Each class of workload comes with different power and latency demands, and data center operators will need to build adaptable platforms to accommodate them. At the Data Center Frontier Trends Summit last week, Carlini expanded further on these themes, offering insights into how the industry can harness AI "for good" — designing infrastructure that supports innovation while aligning with global sustainability goals.
His message was clear: the choices operators make now will shape not just business outcomes, but the broader environmental and social impact of the AI revolution. This episode offers listeners a rare inside look at the technical, operational, and strategic forces shaping tomorrow's data centers. Whether it's retrofitting legacy facilities, deploying modular edge sites, or planning new greenfield campuses, the challenge is the same: prepare for a future where compute density and power requirements continue to skyrocket. If you want to understand how the world's digital infrastructure is evolving to meet the demands of AI, this conversation with Steven Carlini is essential listening.

Are you facing challenges with Edge Computing in your organization? Join us as we explore how Penguin Solutions' Stratus ztC Edge platform, combined with Kubernetes management, creates a powerful, low-maintenance Edge Computing solution.

Learn how to:

- Leverage Kubernetes for scalable, resilient Edge Computing
- Simplify edge management with automated tools
- Implement robust security strategies
- Integrate Kubernetes with legacy operations

This podcast is ideal for IT leaders and engineers looking to optimize their Edge Computing infrastructure with cutting-edge tools and practices.

In this episode of the Data Center Frontier Show podcast, we sit down with Martin Renkis, Executive Director of Global Alliances for Sustainable Infrastructure at Johnson Controls, to explore how Data Center Cooling as a Service (DCCaaS) is changing the way operators think about risk, capital, and sustainability. Johnson Controls has delivered guaranteed infrastructure services for over 40 years, shifting cooling from a CAPEX burden to an OPEX model. The company designs, builds, operates, and maintains systems under long-term agreements that transfer performance risk away from the operator. Key to the model is AI-driven optimization through platforms like OpenBlue, paired with financial guarantees tied directly to customer-defined KPIs. A joint venture with Apollo Group (Ionic Blue) also provides flexible financing, freeing up capital for land or expansion. With rising rack densities and unpredictable AI factory demands, Renkis says cooling-as-a-service offers “a financially guaranteed safety net” that adapts to change while advancing sustainability goals. Listen now to learn how Johnson Controls is redefining cooling for the AI era.

As AI workloads reshape the data center landscape, speed to power has overtaken sustainability as the top customer demand. On this episode of the Data Center Frontier Show, Editor-in-Chief Matt Vincent talks with Brian Melka, CEO of Rehlko (formerly Kohler Energy), about how the century-old power company is helping operators scale fast, stay reliable, and meet evolving energy challenges. Melka shares how Rehlko is quadrupling production, expanding its in-house EPC capabilities, and rolling out modular power blocks through its Wilmott/Wiltech acquisition to accelerate deployments and system integration. The discussion also covers the balance between diesel reliability and greener alternatives like HVO fuel, hybrid power systems that combine batteries and engines, and strategies for managing noise, emissions, and footprint in urban sites. From rooftop generator farms in Paris to 100MW hyperscale builds, Rehlko positions itself as a technology-agnostic partner for the AI era. Listen now to learn how the company is helping the data center industry move faster, smarter, and more sustainably.

Smarter Security Starts with Key & Equipment Management

In data centers, physical access control is just as critical as cybersecurity. Intelligent key and equipment management solutions help safeguard infrastructure, reduce risk, and improve efficiency — all while supporting compliance. Key Benefits:
- Enhanced Security – Restrict access to authorized personnel only
- Audit Trails – Track every access event for full accountability
- Operational Efficiency – Eliminate manual tracking and delays
- Risk Reduction – Prevent loss, misuse, or unauthorized access
- System Integration – Connect with access, video, and visitor tools
- Regulatory Support – Comply with ISO 27001, SOC 2, HIPAA & more
A smart solution for a high-stakes environment — because in the data center world, every detail matters.

New DCF Podcast Episode Breaks Down the Real Work Behind Energy and Emissions Metrics

In the latest episode of the Data Center Frontier Podcast, Editor-in-Chief Matt Vincent sits down with Jay Dietrich, Research Director of Sustainability at Uptime Institute, to examine what real sustainability looks like inside the data center — and why popular narratives around net zero, offsets, and carbon neutrality often obscure more than they reveal. Over the course of a 36-minute conversation, Dietrich walks listeners through Uptime's expanding role in guiding data center operators toward measurable sustainability outcomes — not just certifications, but operational performance improvements at the facility level.

In this episode of the Data Center Frontier Show, Editor-in-Chief Matt Vincent speaks with LiquidStack CEO Joe Capes about the company's breakthrough GigaModular platform — the industry's first scalable, modular Coolant Distribution Unit (CDU) purpose-built for direct-to-chip liquid cooling. With rack densities accelerating beyond 120 kW and headed toward 600 kW, LiquidStack is targeting the real-world requirements of AI data centers while reducing complexity and future-proofing thermal design. “AI will keep pushing thermal output to new extremes,” Capes tells DCF. “Data centers need cooling systems that can be easily deployed, managed, and scaled to match heat rejection demands as they rise.” LiquidStack's new GigaModular CDU, unveiled at the 2025 Datacloud Global Congress in Cannes, delivers up to 10 MW of scalable cooling capacity. It's designed to support single-phase direct-to-chip liquid cooling — a shift from the company's earlier two-phase immersion roots — via a skidded modular design with a pay-as-you-grow approach. The platform's flexibility enables deployments at N, N+1, or N+2 resiliency. “We designed it to be the only CDU our customers will ever need,” Capes says. Tune in to listen to the whole discussion, which goes on to explore why edge infrastructure and EV adoption will drive the next wave of sector innovation.

Every second an AI-enabled data center operates, it produces massive amounts of heat. Cooling has long been designed separately from heat recovery, and for years that is how systems were built: in most facilities, waste heat is managed, expelled, and then forgotten. The data center may not need that heat, but the question remains: where else could this energy be put to use? What if data centers, and the systems and institutions around them, viewed energy differently? Rather than focusing only on a data center's enormous power demands, we can recognize that data centers are part of a larger energy network, capable of giving back through the recovery and redistribution of thermal waste. The pursuit of heat reuse drives technological advances in data center cooling and energy management, but recovering waste heat isn't just a matter of technology and hardware: systems must run smoothly, and uptime is critical. Done well, heat reuse yields more efficient and sustainable technologies that benefit not only data centers but the communities they operate within, creating a symbiotic relationship. Join Trane® expert Esti Tierney as she explores critical considerations for enabling heat reuse as part of the circular economy. Esti will discuss high-performance computing's growing impact on heat production, the importance of a holistic view of thermal management, and why collaborating with community stakeholders to plan a heat redistribution strategy early matters. Heat reuse in data centers is a crucial aspect of modern energy management and sustainability practices, offering benefits that extend beyond immediate operational efficiencies. Designing for energy efficiency and recovering waste heat isn't just about saving money; reducing energy demand on the grid will be critical for everyone, today and into the future.
As server densities increase and next-generation chips push power demands ever higher, waste heat is no longer a byproduct to manage — it's power waiting to be harnessed.

As AI reshapes the digital infrastructure landscape, data center design is evolving at every level. In this episode of the Data Center Frontier Show, we sit down with JP Buzzell, Eaton's VP and Data Center Chief Architect, and Doug Kilgariff, Strategic Accounts Manager, to explore the key shifts driving the next generation of compute environments. Topics include:
- Purpose-built vs. retrofit approaches to AI infrastructure
- Liquid cooling requirements for GPU clusters
- Modular power design and construction
- Behind-the-meter energy strategies
- Data center workforce shortages
- Eaton's evolving role and insights from its Data Center Vision event
From rethinking site selection to solving for stranded assets and building talent pipelines, Buzzell and Kilgariff provide a practical, forward-looking view of the forces shaping AI-era data centers. Listen now to get the inside track on powering the future of AI infrastructure.

In this wide-ranging conversation, EdgeCore Digital Infrastructure CEO Lee Kestler joins the Data Center Frontier Show to discuss how the company is navigating the AI-fueled demand wave with a focused, disciplined strategy. From designing water-free campuses in the Arizona desert to long-term utility partnerships and a sober view on nuclear and behind-the-meter power, Kestler lays out EdgeCore's pragmatic path through today's high-pressure data center environment. He also shares insights on the misunderstood public perception of data centers, and why EdgeCore is investing not just in infrastructure, but in the communities where it builds.