Artificial intelligence is transforming the economics of electricity at a pace faster than any other digital wave. The latest build-out of “AI factories” (hyperscale data centres optimised for training and inference) is colliding with a grid that was not designed for 24/7, multi-hundred-megawatt loads sited wherever fibre, land and water are available. The result: long interconnection queues, rising curtailment of renewables, and a scramble for firm capacity. Extending the useful life and reliability of today’s grid, especially at the distribution edge, is no longer optional; it’s the only way to serve existing customers, absorb more distributed generation, and make room for AI data centres without waiting a decade for new wires.
The demand shock is real (and front-loaded)
Independent analysts now converge on the same direction of travel: steep, near-term load growth driven by data centres, particularly AI. BloombergNEF projects “electricity demand from AI training and services is set to quadruple within a decade,” pushing data centres up the ranking of fastest-growing electricity users globally.
U.S.-focused studies are even starker. Boston Consulting Group (BCG) estimates data centres could rise from roughly 2.5% of U.S. electricity use in 2022 to 7.5% by 2030, growing from about 126 TWh to roughly 390 TWh, with generative AI a major contributor.
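As a rough sanity check on the scale those figures imply, the growth rate can be worked out directly from the two numbers cited above (an illustrative back-of-envelope calculation, using only the BCG figures quoted in this article):

```python
# Back-of-envelope check on the BCG figures cited above:
# ~126 TWh in 2022 growing to ~390 TWh by 2030.
demand_2022_twh = 126.0
demand_2030_twh = 390.0
years = 2030 - 2022

# Implied compound annual growth rate (CAGR)
cagr = (demand_2030_twh / demand_2022_twh) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 15% per year
```

Sustaining ~15% annual growth for eight years is what "tripling in under a decade" actually demands of generation, transmission and distribution capacity.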
And the trend is not limited to national averages. Virginia, the “data centre capital of the world”, commissioned a 2024 grid review that concluded bluntly: “Building enough infrastructure to meet unconstrained energy demand will be very difficult… [and] building enough infrastructure to meet half of unconstrained energy demand would also be difficult.”
Timing is the tightest constraint.
The crunch is not only how much power AI will need, but when it can be delivered. Data centres go up fast; transmission and large generation do not. McKinsey notes typical data centres can be built in 18–24 months, while new power plants take 3–5 years and transmission takes 7–10 years.
Multiple sources highlight the interconnection bottleneck. BCG emphasises that record-scale capacity additions are required over the next five years, while separate analyses describe multi-year delays in grid connections. (One industry assessment summarises today's reality: "5 to 8 years to get a new data centre connected to the grid.")
The upshot: if we wait for big wires alone, AI demand will leapfrog available capacity. Grids need near-term tools that unlock capacity within existing assets.
What “make-the-most-of-what-we-have” looks like
Grid operators are already revisiting non-wires solutions and grid-edge controls to relieve constraints faster than traditional reinforcement:
- Dynamic hosting capacity at LV (low-voltage) feeders to connect more load and DERs without breaching voltage or thermal limits.
- Real-time phase balancing and voltage regulation to turn stranded headroom on one phase into usable capacity for all.
- Power-electronics-based congestion relief at substations and along feeders to manage peak flows, flicker, and harmonics from power-dense, inverter-rich sites.
- Visibility + control: high-resolution monitoring that closes the loop between planning and operations so capacity can be safely “sweated” minute-to-minute.
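The phase-balancing point above can be made concrete with a simplified example. On a three-phase LV feeder, usable capacity for new balanced load is limited by the most heavily loaded phase, so evening out per-phase loads "finds" headroom without new conductors. The sketch below uses entirely hypothetical numbers, purely to illustrate the mechanism:

```python
# Illustrative sketch of the phase-balancing benefit on an LV feeder.
# All ratings and loads are hypothetical, for demonstration only.
PHASE_RATING_KW = 100.0          # assumed thermal limit per phase
loads_kw = [95.0, 60.0, 55.0]    # assumed unbalanced per-phase loads

# Without balancing, a new balanced three-phase load is capped by the
# headroom on the worst-loaded phase.
headroom_unbalanced_total = (PHASE_RATING_KW - max(loads_kw)) * len(loads_kw)

# An ideal balancer spreads the same total load evenly across phases.
balanced_per_phase = sum(loads_kw) / len(loads_kw)
headroom_balanced_total = (PHASE_RATING_KW - balanced_per_phase) * len(loads_kw)

print(f"Headroom before balancing: {headroom_unbalanced_total:.0f} kW")
print(f"Headroom after ideal balancing: {headroom_balanced_total:.0f} kW")
```

In this toy case the feeder's total load is unchanged, yet the connectable headroom grows severalfold; real-world gains depend on how skewed the existing imbalance is.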
These capabilities are exactly where Third Equation’s grid-edge technology (NeX) plays: rapidly deployable LV network enhancement that increases usable capacity, improves voltage quality and reliability, and raises DER hosting limits—without waiting years for new circuits. In practical terms, NeX-class solutions help network operators:
- Serve existing customers better by reducing undervoltage/overvoltage events and outage risk as local peaks climb.
- Host more distributed generation and flexibility (PV, batteries, EV charging) by actively controlling phase imbalance, voltage and power factor—critical with AI-driven loads reshaping local demand curves.
- Accommodate new, high-density loads (AI data centre spurs, edge compute, industrial electrification) by unlocking spare capacity on the LV network and pushing reinforcement further into the future—at a fraction of the cost and time of traditional upgrades.
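The DER hosting-limit point can likewise be illustrated with the voltage-rise approximation commonly used in distribution planning, dV ≈ (P·R + Q·X) / V. The sketch below (hypothetical feeder impedances and limits, not drawn from any cited source) estimates how much PV export a node can accept before breaching a voltage limit, and how reactive-power control of the kind described above raises that limit:

```python
# Illustrative hosting-capacity estimate using the common voltage-rise
# approximation dV ≈ (P*R + Q*X) / V. All parameters are hypothetical.
V_NOM = 230.0            # nominal phase voltage (V)
R, X = 0.25, 0.10        # assumed feeder impedance to the node (ohm)
DV_MAX = 0.10 * V_NOM    # assumed +10% voltage-rise limit

def max_export_w(q_var: float) -> float:
    """Max active-power export (W) before breaching DV_MAX, given the
    inverter's reactive power q_var (negative = absorbing vars)."""
    return (DV_MAX * V_NOM - q_var * X) / R

p_no_control = max_export_w(0.0)
p_with_var_absorption = max_export_w(-5000.0)  # absorb 5 kvar (assumed)

print(f"Hosting limit, no control: {p_no_control / 1000:.1f} kW")
print(f"Hosting limit, absorbing 5 kvar: {p_with_var_absorption / 1000:.1f} kW")
```

Even this crude model shows the lever: active voltage and var control shifts the binding constraint, which is exactly how grid-edge devices raise hosting limits without reconductoring.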
Why this matters to siting and reliability
Policy groups and market observers expect more data centres to co-locate with on-site or near-site firm power to bypass congested grids in the short term. But even "islanded" or partially islanded campuses still rely on the distribution network for import/export balancing, black-start support, and backup. The Center for Strategic and International Studies (CSIS) notes some operators are pairing data centres with dedicated gas plants; others explore SMRs or behind-the-meter hybrids. Regardless, the surrounding grid must manage bidirectional flows, fault currents, and power quality: jobs tailor-made for agile grid-edge control.
The strategic takeaway for utilities and regulators
- Load growth is back—and concentrated. AI clusters create step-changes in local demand that outpace network reinforcement timelines. (BloombergNEF: AI power demand to quadruple in ~10 years.)
- Even the best-case buildout struggles to keep up. Virginia's JLARC (Joint Legislative Audit and Review Commission): meeting unconstrained demand is "very difficult," and even half of that is "difficult."
- Therefore, the fastest capacity is "found" capacity. Grid-edge technologies like Third Equation's NeX extend the life and utility of today's network, add operational flexibility, and create a glide path while new generation and transmission catch up.
In short, AI is forcing an “efficiency first” era for the grid. If we treat LV and MV networks as controllable, optimisable assets, not passive copper, we can keep serving homes and businesses, connect more renewables, and still welcome the next wave of AI compute.