The data center boom is reshaping the grid faster than traditional planning models can keep pace – and nobody in the room needed convincing.
What the session, moderated by Eva Guadamillas of Carbon Direct, actually wrestled with was harder: how do you plan for growth at this scale when the numbers themselves are uncertain, the regulations are still being written, and the technology is being stress-tested in real time?
The panel brought together Doug Bryan (Carbon Direct), Mark Crowdis (127 Energy), Jon Parrella (TerraFlow Energy), Attilio Ravani (Orennia), and Anna Siefken (LDES Council) for a spirited discussion.
Separating Signal from Noise
PJM is projecting a 20% increase in peak load through 2030. ERCOT’s interconnection queue has tripled in a single year. Will any of it actually materialize?
Ravani offered a three-part framework for stress-testing forecasts: start with publicly announced projects, which are a fraction of what’s sitting in interconnection queues. Layer in transmission constraints – many projects simply can’t connect even if they’re real. Then apply a political reality check. “No state governor is going to allow their grid to get to a blackout situation,” he said. “If supply can’t keep up, those projects are going to have to delay, go somewhere else, or bring their own power.”
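To make that screening concrete, here is a toy version of the first two screens in Python. The project list, field names, and pass/fail flags are invented for illustration – this is not Orennia’s data or methodology.

```python
# Toy illustration of the first two screens: announced projects only, then
# transmission feasibility. Hypothetical projects; not Orennia's methodology.
projects = [
    {"name": "Site A", "announced": True,  "tx_feasible": True,  "mw": 500},
    {"name": "Site B", "announced": False, "tx_feasible": True,  "mw": 900},   # queue-only request
    {"name": "Site C", "announced": True,  "tx_feasible": False, "mw": 1200},  # no path to connect
]

screened = [p for p in projects if p["announced"] and p["tx_feasible"]]
firm_mw = sum(p["mw"] for p in screened)
print(f"Load surviving the first two screens: {firm_mw} MW "
      f"({len(screened)} of {len(projects)} projects)")
```

The third screen – the political reality check – is a judgment call on how much of what remains regulators would actually let connect before reliability or affordability suffers.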
Bryan added that the uncertainty is driving innovation regardless. Flexibility – through demand response, batteries, and long-duration storage – is becoming the primary mechanism for managing a grid that could see data centers represent 5-15% of US load by 2030. “How do you ensure you don’t break the system? You maximize the capacity you have available. You do that through flexibility.”
Regulation Catches Up
Guadamillas noted that politicians on both sides of the aisle are now calling to slow data center growth – a sign of how quickly affordability has become a political flashpoint – and asked the panel what FERC and DOE are actually trying to accomplish with new interconnection rules.
Siefken, drawing on her DOE background, framed the core tension: connect load quickly without shifting excess costs onto hospitals, neighbors, and municipal facilities. Her prescription was architectural: “The interconnection rules should incorporate duration.” Short-duration batteries handle ramps and intraday variability; long-duration storage provides multi-day adequacy during stress events – and reduces the need to socialize additional firm capacity onto ratepayers. “It can work,” she emphasized, “if resilience is actually embedded in the architecture.”
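As a rough sketch of what “incorporating duration” could mean in practice, the snippet below buckets hypothetical stress events by how long they last. The event durations and the four-hour threshold are assumptions for illustration, not anything FERC, DOE, or the LDES Council has codified.

```python
# Hypothetical stress-event durations in hours; the 4-hour threshold is illustrative only.
event_hours = [2, 3, 1, 36, 4, 60, 2]

intraday = [h for h in event_hours if h <= 4]   # ramps and daily peaks: short-duration batteries
extended = [h for h in event_hours if h > 4]    # multi-day adequacy events: long-duration storage

print(f"Intraday events ({len(intraday)}): {intraday}")   # handled without new firm capacity
print(f"Extended events ({len(extended)}): {extended}")   # where duration requirements would bind
```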
Crowdis took a more economic angle. “At the end of the day, it’s: how much are you going to charge me to get energy?” Virginia’s approach – allocating 85% of transmission costs directly to data centers – at least gave developers something certain to underwrite. His broader advice: work closely with utilities and help solve their problems. “Every utility has 200 hours a year where they don’t have enough energy. If you can help them solve that, you can help everyone.”
Texas’s SB6 – requiring new large loads to be curtailable, pay their own interconnection costs, and disclose backup generation – generated the sharpest exchanges. Parrella was blunt about why data centers have been flooding interconnection queues in the first place. “The cost of power for one gigawatt for one hour is $70,000 to $80,000. The revenue from compute for that same hour is anywhere from a million to a billion dollars. They don’t care about power costs. They care about time.” SB6, he argued, forces a long-overdue reckoning with speculative queue clogging from land developers trying to inflate site values.
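Parrella’s arithmetic is worth spelling out. Taking the figures quoted on stage at face value – and nothing more precise than that – the back-of-the-envelope below shows why power cost barely registers against compute revenue:

```python
# Back-of-the-envelope using the figures quoted on stage; illustrative only.
power_cost_per_gw_hour = 75_000     # midpoint of the quoted $70k-$80k for 1 GW over 1 hour
revenue_low, revenue_high = 1_000_000, 1_000_000_000   # quoted compute revenue for that same hour

print(f"Implied energy price: ${power_cost_per_gw_hour / 1_000:.0f}/MWh")   # 1 GW for 1 hour = 1,000 MWh
print(f"Power as a share of revenue: {power_cost_per_gw_hour / revenue_low:.1%} "
      f"to {power_cost_per_gw_hour / revenue_high:.4%}")
```

At worst, energy is a single-digit share of hourly revenue; at best, it is a rounding error – which is exactly why, in Parrella’s telling, time to power matters far more than the price of power.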
Bryan’s modeling put numbers to the stakes. Requiring flexibility for just a handful of hours – less than 1% of the year – eliminates projected load-shedding risk in ERCOT by 2028 and preserves billions of dollars in consumer welfare. “When we model demand response, we’re essentially creating ghost batteries at the same node as the load. And those ghost batteries can exist as real battery storage in the real world.”
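For readers who want the intuition behind the “ghost battery” framing, here is a minimal sketch. The supply and load series are synthetic, loosely calibrated so that shortfalls land on the order of 1% of hours; it is not Carbon Direct’s model.

```python
import numpy as np

# Synthetic hourly series; toy numbers chosen so shortfalls are rare, not a real ERCOT study.
rng = np.random.default_rng(42)
hours = 8_760
supply_mw = 86_000                             # available capacity, held flat for simplicity
load_mw = rng.normal(80_000, 2_500, hours)     # hourly demand with normal variation

# In the hours where load exceeds supply, curtailing the flexible load acts like a
# battery discharging at the same node - the "ghost battery."
shortfall_mw = np.clip(load_mw - supply_mw, 0, None)
flex_hours = int(np.count_nonzero(shortfall_mw))

print(f"Hours needing flexibility: {flex_hours} ({flex_hours / hours:.2%} of the year)")
print(f"Largest hourly shortfall: {shortfall_mw.max():,.0f} MW")
```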
The Battery Debate
An audience member cut to the chase: if a data center is just a constant high load, why doesn’t lithium-ion simply work?
Parrella pushed back on the premise. AI data centers don’t draw constant load – their power curves look like an EKG. Batteries cycling multiple times per minute blow through their 3,000–7,000 cycle design limits quickly, and undersized systems compound the problem. “You’re exponentially increasing the probability of thermal runaway. Lithium is not the right solution because of the degradation and fire risk.”
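The cycle-life arithmetic helps explain the concern. The sketch below uses simple equivalent-full-cycle accounting with assumed values for cycling frequency and depth of discharge; real degradation depends on chemistry, temperature, and cycle depth in nonlinear ways, so treat the numbers as illustrative only.

```python
# Equivalent-full-cycle (EFC) accounting with assumed values; illustrative only.
design_life_cycles = 5_000        # midpoint of the 3,000-7,000 range cited on the panel
micro_cycles_per_minute = 2       # "multiple times per minute"
depth_of_discharge = 0.01         # assume each micro-cycle swings 1% of usable capacity

efc_per_day = micro_cycles_per_minute * 60 * 24 * depth_of_discharge
days_to_exhaust = design_life_cycles / efc_per_day

print(f"{efc_per_day:.0f} equivalent full cycles per day")
print(f"Design life consumed in roughly {days_to_exhaust:.0f} days (~{days_to_exhaust / 365:.1f} years)")
```

The point is not the exact figures but the order of magnitude: high-frequency cycling consumes design life far faster than the daily-cycling duty the packs were rated for.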
Crowdis offered a counterpoint from the field: a contact at a top-three IPP reported LFP working across dozens of projects – though batteries are degrading roughly two years faster than expected. “I’m just giving you the data from the developer putting them in the ground.”
Siefken brought it home for long-duration storage. “We’re not having any of these issues. Costs are coming down as much as 47% between now and 2030. You’ll have something at cost parity that is an alternative.” Parrella added that the value goes beyond replacing lithium: “Long-duration storage can replace multiple components on campus, and drive total system costs well below what you’d pay just to swap out the batteries.”
The decisions being made right now — about architecture, interconnection, and which technologies get built at scale — will shape the grid for decades. The panelists agreed that getting them right requires cross-sector collaboration, long-term thinking, and technical rigor.