We’re taking a closer look at a topic that’s no longer optional for data‑center leaders: sustainability with measurable accountability. As carbon regulations tighten, especially around Scope 3 emissions, owners and operators are rethinking how they specify and source every component in the power chain. At the same time, supply‑chain pressures, copper constraints, and new state‑level requirements like on‑premises power for large sites are introducing new complexities into design, procurement, and long‑term planning. Joel Wynn, VP of Data Center Sales at Southwire, brings a unique end‑to‑end perspective, spanning mining practices, material traceability, advanced conductor engineering, Environmental Product Declarations, and the real‑world challenges hyperscalers and colos face when trying to reduce embodied carbon.
Hear a conversation about how reduced‑carbon copper, transparent supply chains, and next‑generation power infrastructure can meaningfully move the needle on sustainability, and how data‑center developers can prepare for the regulatory, technical, and community‑driven expectations coming next.
Where does power innovation come into play in the context of sustainability? We are already seeing shifts in the industry, including the move to on-premises power. Southwire is focused on bringing innovation to the industry from the mining companies to the data center, all while identifying opportunities to upgrade existing cable for greater efficiency.
As AI data center campuses scale toward gigawatt capacity, the industry is confronting a new kind of bottleneck. Not just how to generate power, but how to move it efficiently across increasingly complex environments.
In this episode of the Data Center Frontier Show Podcast, MetOx CEO Bud Vos outlines why traditional copper-based power distribution may be approaching its limits, and how high-temperature superconducting (HTS) wire could offer a fundamentally different path forward.
“When you start looking at gigawatt-type campuses, you find three fundamental constraints—the grid interconnect, campus distribution, and delivery inside the data hall,” Vos explains. At each layer, scaling with copper drives exponential increases in materials, infrastructure, and complexity.
HTS technology changes that equation. By delivering roughly 10x the power density of copper, superconducting cables can dramatically reduce the physical footprint of power infrastructure, replacing dozens of conventional cables with just a few, while also cutting material use and simplifying system design.
The technology also reverses a key trend in data center power architecture. Instead of pushing voltage higher to compensate for copper limitations, superconductors enable higher current at lower voltage, potentially simplifying electrical systems across the facility.
Just as importantly, superconductors are effectively lossless. “They don’t generate heat as part of the power delivery infrastructure,” Vos notes, a property that could reshape how operators think about thermal management in high-density AI environments. While HTS systems require cooling with liquid nitrogen, that requirement may align with the industry’s broader shift toward liquid cooling.
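The trade-off behind these two points comes down to basic circuit physics: for a fixed power P = V·I, lowering the voltage raises the current, and a copper conductor's resistive loss grows with the square of that current (I²R), which is why copper systems push voltage up. A superconductor's near-zero resistance removes that penalty. A back-of-envelope sketch, using illustrative numbers not taken from the episode:

```python
def resistive_loss_watts(power_w, voltage_v, resistance_ohm):
    """I^2 * R loss for a conductor delivering power_w at voltage_v."""
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

# Deliver 10 MW over a feeder with a hypothetical 0.05-ohm resistance.
P = 10_000_000
loss_hv = resistive_loss_watts(P, 35_000, 0.05)  # medium-voltage run
loss_lv = resistive_loss_watts(P, 1_000, 0.05)   # same run at low voltage

print(round(loss_hv))  # 4082  -> ~4 kW lost at 35 kV
print(round(loss_lv))  # 5000000 -> 5 MW lost at 1 kV: untenable for copper
# With R ~ 0 (superconducting), loss is ~0 at any voltage, so designers
# are free to choose higher current at lower voltage, as Vos describes.
```

The 35x reduction in voltage produces a 1,225x (35²) increase in loss, which is the exponential-style scaling pressure the episode alludes to.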
Beyond engineering, HTS could also play a role in easing permitting and community opposition by reducing the physical footprint of power infrastructure. Narrower rights-of-way and fewer materials translate into less visible impact—an increasingly important factor as data center development faces growing scrutiny.
Crucially, superconducting systems are not theoretical. They have already been deployed in utility environments, providing a track record of reliability that may help accelerate adoption in the data center sector.
As onsite and behind-the-meter generation become more common, HTS is particularly well-suited to moving large amounts of power across multi-building campuses and into high-density data halls. At the same time, the technology offers a potential alternative to strained supply chains for copper and traditional electrical equipment.
Looking further ahead, superconductivity’s role may extend even deeper, with HTS materials also serving as a foundation for emerging fusion energy systems, hinting at a future where power generation and data center infrastructure are more tightly linked.
For now, Vos sees the industry at the beginning of an adoption cycle. “We’re deploying, testing, and then innovating on top of that,” he says.
As AI infrastructure enters its execution phase, superconductivity may move from a niche technology to a core component of how the next generation of data centers is powered.
A look at the major trends shaping the data center and HVAC industries in 2026. Key topics include the growing role of high-voltage DC for improved power quality, the rise of liquid cooling, and how air-cooling technologies continue to play a critical part across the data center ecosystem.
Industry discussions also touch on innovation momentum coming out of recent events, shifting demand toward high-growth markets, and the increasing importance of localized manufacturing to reduce lead times, navigate tariffs, and strengthen supply chain resilience—especially as AI-driven data center expansion accelerates.
Themes such as energy efficiency, grid capacity limitations, hybrid cooling approaches, and system level optimization frame a broader question for operators and suppliers alike: Where do you fit within the data center system, and how are you preparing for what comes next?
Subzero Engineering is pleased to announce the acquisition of the Dissolvable Air Barrier (DAB) Panels product line from Cambridge R&D, further expanding Subzero’s portfolio of data center containment solutions and reinforcing its commitment to safety, performance, and turnkey system delivery.
DAB Panels are a unique overhead containment solution designed to provide effective airflow separation during normal data center operation while dissolving within seconds when exposed to water during sprinkler activation. This dissolvable design helps eliminate falling panel hazards and supports safer fire suppression outcomes—addressing a critical challenge found in traditional rigid overhead containment systems.
“With this acquisition, we’re strengthening our ability to deliver truly integrated, safety-driven containment solutions,” said Shane Kilfoil, President of Subzero Engineering. “DAB Panels complement our existing containment portfolio and give our customers another proven option to address airflow management and fire safety without compromise.”
DAB Panels are engineered for both hot aisle and cold aisle containment applications and offer a combination of airflow performance, safety, and installation flexibility. Made from EPA-certified, plant-based cellulose materials, the panels achieve Class A fire and smoke performance, producing low heat and minimal smoke while maintaining visibility for emergency personnel.
Despite their dissolvable design, DAB Panels remain durable during normal operation—withstanding high static air pressure and maintaining airflow separation where it matters most. Panels can be easily modified in the field to accommodate varying cabinet heights and existing infrastructure, eliminating the need to relocate sprinkler heads and reducing installation time and cost.
DAB Panels integrate seamlessly across Subzero’s full portfolio of data center containment products, including aisle frames, doors, roofs, and airflow management systems. This unified approach enables Subzero to deliver turnkey containment solutions engineered for performance, safety, and long-term scalability—backed by a single partner and a coordinated system designed to work together.
In this episode of the Data Center Frontier Show, DCF Editor-in-Chief Matt Vincent speaks with Michael Siteman, President of Prodigious Proclivities and a long-time leader and board member within 7x24 Exchange International, about how data center development is being reshaped by AI, power scarcity, network strategy, and community resistance.
Siteman explains how site selection has evolved from a traditional real estate exercise into a far more complex infrastructure challenge.
“The business used to be a pure real estate play,” Siteman says. “Now it’s a systems engineering problem. It’s power, network topology, the real estate itself, and political risk.”
The conversation explores the growing dominance of power in development strategy, including the rapid rise of behind-the-meter generation as utilities struggle to keep pace with demand. Siteman notes that attitudes toward onsite generation have shifted dramatically in just the past few months.
“Six months ago, people would say, ‘If you don’t have grid interconnection, we’re not interested,’” he says. “In the last 30 days, it’s completely different.”
Vincent and Siteman also discuss the balance between network access and power access, the risks of pre-leasing capacity before buildings are completed, and the growing importance of local politics and government relations in getting projects approved.
The episode closes with a look at the widening gap between traditional hyperscale facilities and AI factories, the question of whether AI infrastructure is heading toward a bubble, and the industry’s urgent workforce shortage.
“Data centers don’t run themselves,” Siteman says. “We simply don’t have enough people to build and operate the infrastructure that’s coming.”
This is a grounded, field-level conversation about what is really driving data center development in the AI era, and what the industry will need to solve next.
The AI infrastructure boom is rapidly reshaping how the data center industry thinks about power. What was once a relatively straightforward utility procurement exercise is evolving into a complex strategy spanning onsite generation, fuel logistics, financing, and system architecture.
That reality framed a recent special edition of The Data Center Frontier Show Podcast, which recast and updated a pivotal DCF Trends Summit 2025 session: From Grid to Onsite Powering: Optimizing Energy Behind the Meter for Data Centers.
Moderated by Fengrong Li, Senior Managing Director at FTI Consulting, the panel explored how operators are responding as interconnection timelines stretch and AI workloads surge. Li’s framing emphasized a core shift: onsite power is moving from contingency planning to critical-path infrastructure.
From the OEM perspective, David Blank of Siemens Energy noted that behind-the-meter deployments have accelerated sharply over the past year as developers confront multi-year waits for firm utility capacity.
“Everyone would prefer grid power,” Blank said. “But in many cases, reliable access isn’t available for five, ten, even ten-plus years.”
Panelists agreed that AI’s scale and speed are driving a structural rethink. Brian Gitt of Oklo described the moment as a return to industrial roots, with large loads once again building dedicated generation to meet growth timelines.
At the same time, new technical pressures are emerging. AI clusters can produce sharp load swings, forcing developers to deploy fast-response buffering technologies such as batteries, flywheels, and supercapacitors to maintain stability.
Despite differing technology paths—including gas turbines, hydrogen fuel cells, and advanced nuclear—the panel aligned on one common theme: modularity. Phased power blocks increasingly mirror how AI campuses are actually built and financed.
The discussion also highlighted the growing importance of contract structures. Long-term offtake commitments, capacity reservations, and credit support are increasingly required to unlock equipment queues and fuel supply.
Other panelists included Marty Trivette of AlphaStruxure and Yuval Bachar of ECL. The event was hosted by Data Center Frontier’s Matt Vincent.
The takeaway was clear: in the AI era, energy strategy has moved to the critical path—and for many operators, that path now runs behind the meter.
The data center industry is racing into the AI era with bigger campuses, tighter timelines, and unprecedented infrastructure complexity. But in this episode of The Data Center Frontier Show Podcast, 7x24 Exchange International founding member and Mission Critical Global Alliance (MCGA) board member Dennis Cronin argues the industry’s biggest constraint may be the one it talks about least: people.
Cronin’s message is direct: the “talent cliff” isn’t coming; it’s already here. Based on recent research into open roles, he estimates 467,000 to 498,000 openings in core data center positions (facilities and ops leadership, electrical, generator/UPS, HVAC, controls), plus another ~514,000 emerging roles tied to AI infrastructure, sustainability, and cyber-physical security—bringing the total to roughly one million jobs the industry needs to fill.
A major driver is what Cronin calls the “five-year experience trap”: employers require five years of experience even for entry-level roles, but newcomers can’t get experience without being hired. The result is widespread talent poaching, with workers jumping from site to site for 10–20% raises while the overall labor pool never grows.
Cronin also highlights a frequently missed reality in public policy debates: the job multiplier effect. While data centers may have lean direct staffing, they support a much larger ecosystem of contractors, service providers, and manufacturers, from generator and UPS technicians to security integrators and the electrical/mechanical supply chain, many of whom are already scrambling to hire.
On training, Cronin explains why company-run programs and commercial training aren’t enough on their own. Internal academies often produce siloed specialists trained for a single operator’s environment, while commercial courses, often ~$1,000 per day per person, are typically designed to upskill people already in the industry, not onboard new entrants.
MCGA’s strategy focuses on community colleges as the most scalable on-ramp: affordable programs, scholarships, and hands-on labs that can produce strong technicians through two-year degree programs. Cronin cites programs at Cleveland Community College (NC), Northern Virginia Community College, and Southside Community College (VA), noting that dozens of schools are exploring data center curricula but funding remains a barrier.
Cronin’s proposed solution is a true workforce ecosystem: outreach, standardized curriculum, certification labs, structured apprenticeships, and employer commitments. He also advocates replacing the “five years” requirement with an entry-level certification that proves foundational knowledge: industry acronyms and terminology, reading one-line diagrams, SOPs/MOPs, and, crucially, safety and situational awareness in electrical and mechanical environments.
Finally, Cronin tackles the money question. With $60B in data centers announced this year, he says the industry needs a major, shared investment across operators, vendors, contractors, and manufacturers to fund training and scholarships at scale. The stakes are operational: in an era of gigawatt AI facilities and shrinking margins for error, workforce readiness is now a mission-critical issue.
In the latest episode of The DCF Show Podcast, Data Center Frontier founder Rich Miller joins current DCF Editor-in-Chief Matt Vincent and Senior Editor David Chernicoff to examine where the data center industry stands as AI infrastructure moves from announcement to execution.
Miller also discusses his new Data Center Richness podcast and Substack project, which explores how data center professionals consume content and learn about the rapidly evolving industry. With information overload now a reality, Miller’s goal is to distill the most important signals shaping infrastructure decisions.
The conversation then turns to what defines 2026 for data centers: execution. After a year filled with megaproject announcements, the industry now faces the harder task of actually delivering campuses at AI scale—often under severe power constraints.
With utilities struggling to keep pace, on-site generation is shifting from temporary solution to long-term strategy, as developers seek reliable ways to power projects while easing community concerns about grid impacts.
Public resistance has also become a major factor. Miller notes that community opposition is now delaying or halting billions of dollars in projects, forcing operators to rethink how they engage with local stakeholders. Issues like power pricing and water usage are increasingly central to project approval.
On the technology front, Nvidia’s roadmap continues to reshape infrastructure planning, with rack densities rising sharply, liquid cooling becoming standard, and new power distribution models emerging to support AI factories. At the same time, Miller expects the market to stratify, with some operators specializing in AI factories while others serve cloud and enterprise demand.
The discussion also touches on nuclear power’s future role, with data centers positioning themselves as anchor customers, though meaningful SMR deployment remains years away.
Ultimately, Miller argues that the industry is moving faster than ever, and 2026 will reveal how well today’s massive investments translate into real deployments.
As he concludes: the next phase belongs to those who can deliver.
In this installment of Nomads at the Frontier, Data Center Frontier Editor-in-Chief Matt Vincent checks in with Nomad Futurist founders Nabeel Mahmood and Phillip Koblence for on-the-ground reflections from PTC 2026 in Hawaii, and a clear signal that the digital infrastructure market is shifting from hype to delivery.
Mahmood says PTC 2026 reaffirmed the move toward integrated digital infrastructure, with attendance continuing to grow and conversations increasingly translating into real progress. But the defining theme across AI, investment, and deployments was power. As Koblence puts it, “all of those questions are power”—and unlike prior years, the tone has moved from speculative site talk to “show me the money, show me the power,” with real timelines and secured capacity.
The episode digs into the industry’s evolving stance on behind-the-meter generation, which is increasingly treated as the most viable medium-term path to getting online as grid bureaucracy and interconnection delays become the “long pole in the tent.” The discussion also tackles the sustainability tension in that shift: why the industry often kicks the can down the road, what alternative options (fuel cells, hydrogen) may offer, and why nuclear timelines don’t solve the near-term gap.
Mahmood and Koblence also emphasize that the buildout isn’t just a power story; it’s a people and community story. Workforce shortages remain structural and long-lived, and community acceptance is now central to the industry’s “license to build.” Nomad Futurist’s mission, they argue, is becoming a bridge between digital infrastructure and the public, demystifying what the industry is, why it matters, and how the next generation can enter it.
Finally, the conversation pressure-tests the AI boom: Mahmood predicts the “mega-scale AI factory” bubble will burst within three to five years, with growth shifting toward inferencing closer to users, but he still expects the sector to normalize into sustained double-digit expansion. And on Nvidia’s roadmap, both founders call for realism: megawatt racks may be coming, but as Koblence notes, “there are zero facilities” today that can support a 1–1.5 MW rack at scale.
In the latest episode of the Data Center Frontier Show Podcast, Editor-in-Chief Matt Vincent speaks with Sailesh Krishnamurthy, VP of Engineering for Databases at Google Cloud, about the real challenge facing enterprise AI: connecting powerful models to real-world operational data.
While large language models continue to advance rapidly, many organizations still struggle to combine unstructured data (such as documents, images, and logs) with structured operational systems like customer databases and transaction platforms. Krishnamurthy explains how vector search and hybrid database approaches are helping bridge this gap, allowing enterprises to query structured and unstructured data together without creating new silos.
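To make the hybrid pattern concrete, here is a minimal, hypothetical sketch of the idea: apply structured predicates first, then rank the survivors by vector similarity, all against one table. The table, fields, and tiny three-dimensional embeddings are invented for illustration; a production system would use a real embedding model and a database-native vector index rather than in-memory Python.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# One table holding both structured columns and an embedding column,
# so structured and unstructured data live side by side (no new silo).
tickets = [
    {"id": 1, "region": "us-east", "status": "open",   "vec": [0.9, 0.1, 0.0]},
    {"id": 2, "region": "us-east", "status": "closed", "vec": [0.8, 0.2, 0.1]},
    {"id": 3, "region": "eu-west", "status": "open",   "vec": [0.1, 0.9, 0.2]},
]

def hybrid_search(query_vec, region, status, top_k=1):
    # Structured predicate first (exact match on operational fields),
    # then semantic ranking of the remaining rows by vector similarity.
    candidates = [t for t in tickets
                  if t["region"] == region and t["status"] == status]
    candidates.sort(key=lambda t: cosine(query_vec, t["vec"]), reverse=True)
    return [t["id"] for t in candidates[:top_k]]

print(hybrid_search([1.0, 0.0, 0.0], "us-east", "open"))  # [1]
```

The key design point the episode highlights is exactly this unification: the filter and the similarity ranking run over the same store, rather than joining results from a separate vector database after the fact.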
The conversation highlights a growing shift in mindset: modern data teams must think more like search engineers, optimizing for relevance and usefulness rather than simply exact database results. At the same time, governance and trust are becoming foundational requirements, ensuring AI systems access accurate data while respecting strict security controls.
Operating at Google scale also reinforces the need for reliability, low latency, and correctness, pushing infrastructure toward unified storage layers rather than fragmented systems that add complexity and delay.
Looking toward 2026, Krishnamurthy argues the top priority for CIOs and data leaders is organizing and governing data effectively, because AI systems are only as strong as the data foundations supporting them.
The takeaway: AI success depends not just on smarter models, but on smarter data infrastructure.
🎧 Listen to the full episode to explore how enterprises can operationalize AI at scale.
The data center industry is changing faster than ever. Artificial intelligence, cloud expansion, and high-density workloads are driving record-breaking energy and cooling demands. But behind every megawatt of compute capacity lies an equally critical resource: water.
As data halls evolve from static infrastructure to dynamic, service-driven ecosystems, cooling has emerged as one of the most powerful levers for efficiency, reliability, and sustainability. In this episode, Ecolab explores how Cooling as a Service (CaaS) is transforming data center operations, shifting cooling from a capital expense to a measurable, performance-based service that drives uptime, reliability, and environmental stewardship.
Tune in to hear experts discuss how data centers can future-proof their operations through a smarter, service-oriented approach to thermal management. From proactive analytics to commissioning best practices, this conversation dives into the technologies, partnerships, and business models redefining how cooling is managed and measured across the world’s most advanced digital infrastructure.