AI data centers are being built around gas fields. That changes the engineering math
The latest AI buildout in the US is converging on a blunt answer to a blunt constraint: large models need huge amounts of electricity, and gas is fast to deploy.
That explains the wave of projects in Texas, Louisiana, and Tennessee. Poolside and CoreWeave are planning a West Texas campus called Horizon, spread across more than 500 acres and tied to Permian Basin gas, with roughly 2 GW of computing capacity and 40,000+ Nvidia chips. OpenAI’s Stargate site near Abilene needs about 900 MW across eight buildings, with a new gas-fired backup plant using naval-derived turbine tech. Meta is building a $10 billion campus in Richland Parish, Louisiana, sized for about 2 GW of compute and linked to 2.3 GW of new gas generation from Entergy. xAI’s Memphis operation also runs with gas in the mix, fed through pipeline infrastructure tied to fracked fields.
Meta’s El Paso project, matched to 100% clean power by 2028, stands out because few of these campuses are taking that route.
That reliance on gas is deliberate.
AI has moved into a power-first phase
For years, the standard bottleneck list was chips, networking, packaging, and lead times.
Now firm power is climbing the list fast. Big GPU campuses need electricity delivered where they can use it, at the scale they need, with fewer interruptions than most public grids want to guarantee. That pushes operators toward private substations, on-site generation, islandable microgrids, and sites near pipelines and transmission corridors.
There’s political backing for this too. A July 2025 executive order fast-tracks permitting and incentives for AI data centers powered by gas, coal, or nuclear, while excluding renewables. The industry pitch is straightforward: if the US wants to outbuild China in AI, it needs power added fast.
That argument may work in Washington. It also gives developers room to choose the quickest source of energy, not the cleanest.
Why gas keeps winning
From an operator’s perspective, the case for gas is easy to understand.
A modern AI rack can draw 100 to 150 kW. A liquid-cooled NVL72-class cluster island can push past 1 MW. At that point, you’re not designing a normal enterprise data center. You’re building industrial power infrastructure and filling it with GPUs.
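To make the scale concrete, here's a rough back-of-envelope sketch; the rack draw, campus size, and PUE figures below are illustrative assumptions, not numbers from any of the projects above:

# All inputs are illustrative placeholders for a rough sizing exercise.
RACK_DRAW_KW = 130     # midpoint of the 100-150 kW range above
CAMPUS_IT_MW = 2000    # a campus sized like the ~2 GW examples above
ASSUMED_PUE = 1.2      # assumed facility overhead

racks = CAMPUS_IT_MW * 1000 / RACK_DRAW_KW
at_the_meter_mw = CAMPUS_IT_MW * ASSUMED_PUE
print(f"~{racks:,.0f} racks, ~{at_the_meter_mw:,.0f} MW delivered to the site")

Even with generous assumptions, that is tens of thousands of racks and gigawatts at the meter, which is why the power source dominates the siting decision.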
Training loads spike. Inference at scale can stay heavy around the clock. Video generation and multimodal systems make both harder to manage. If the cluster can’t get enough power, or the cooling system can’t handle density peaks, utilization falls and the economics get ugly fast.
Gas turbines solve several practical problems:
- they ramp quickly
- they provide black-start capability
- they can sit on-site or behind the meter
- they cut exposure to grid congestion and interconnection delays
Simple-cycle turbines are less efficient, but they’re easier to deploy and ramp. Combined-cycle systems get more electricity from the same fuel by capturing waste heat, but they add complexity and usually take longer to build. If a company is racing to add capacity before the next model cycle, speed usually wins.
That’s why these campuses are starting to look like small utilities. Think N+1 or 2N redundancy in generation and substations, battery storage for fast transients, SCADA-controlled switching, and microgrids that can island during disturbances. This is serious operational infrastructure.
Cooling improved. The energy picture didn’t
Liquid cooling is now standard at the high end because air cooling is close to its limits. Direct-to-chip cold plates and closed-loop systems help handle rack density and cut obvious evaporative water loss. That part is real.
The problem is elsewhere. Higher-density cooling systems need more pumping power, tighter control, and more supporting equipment. So direct water use can improve while the indirect water story worsens if the electricity behind the cooling comes from thermal generation.
That’s why PUE no longer tells you enough. A campus can post a decent power usage effectiveness number and still have a bad emissions and water profile. To assess impact, you need at least three metrics:
- PUE for facility overhead
- CUE for carbon intensity
- WUE for water consumption
On-site gas generation typically lands around 350 to 500 gCO2/kWh, depending on turbine efficiency and methane leakage assumptions. Combined-cycle plants do better than simple-cycle. The basic point doesn’t change: if your AI region is anchored to fracked gas, the software stack runs on fossil fuel.
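As a rough sketch of how the three metrics fit together (every input below is an illustrative placeholder, with the carbon intensity taken from the middle of the range just quoted):

# Illustrative placeholders for one reporting period.
IT_ENERGY_MWH = 1_000          # energy delivered to IT equipment
FACILITY_ENERGY_MWH = 1_250    # total site energy, including cooling and losses
GRID_G_CO2_PER_KWH = 450       # mid-range on-site gas figure from above
WATER_LITERS = 1_800_000       # site water consumption

pue = FACILITY_ENERGY_MWH / IT_ENERGY_MWH        # facility overhead
cue = pue * GRID_G_CO2_PER_KWH                   # gCO2 per kWh of IT energy
wue = WATER_LITERS / (IT_ENERGY_MWH * 1_000)     # liters per kWh of IT energy

print(f"PUE={pue:.2f}, CUE={cue:.0f} gCO2/kWh, WUE={wue:.2f} L/kWh")

The point of looking at all three together is that a respectable PUE does nothing to offset a high grid carbon intensity.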
It’s also a security problem
A gas-backed GPU campus widens the attack surface in ways many software teams still underrate.
The operational tech stack controlling turbines, switchgear, cooling loops, and substation gear has to be secured alongside the usual cloud and network layers. That means PLCs, vendor remote access, industrial protocols, and SCADA systems are part of the AI delivery path. Weaknesses there don’t just create downtime. They can take a region offline.
Serious operators should be mapping to NERC CIP-aligned controls even when not strictly required, segmenting IT and OT networks hard, and treating vendor access as hostile until proven otherwise. Remote maintenance links are an obvious risk. Poorly secured ICS devices sitting behind assumed-trusted management planes are another.
A lot of AI infrastructure talk still treats power as a capacity issue. It’s also a resilience and security issue.
What changes for developers and ML teams
Most developers won’t choose the substation design. They still affect how much power their workloads burn.
That matters more now because power is becoming a first-order constraint on cost, scheduling, and even region choice. If your team trains large models or serves inference at scale, energy-aware engineering is basic operational discipline.
A few areas deserve attention.
Model architecture now has visible energy costs
Mixture-of-Experts and activation sparsity reduce FLOPs per useful result; sequence parallelism and related techniques spread the remaining work more efficiently across the hardware. That's old news in research circles. What's changed is how directly those savings map to infrastructure strain. Lower average power draw, flatter peaks, shorter wall-clock training times, and less cooling pressure all matter when the cluster is this dense.
The same goes for inference. Quantization with NF4, FP4, or practical 8-bit paths, plus compiler and runtime work in tools like TensorRT-LLM, XLA, Triton, and vLLM, can cut energy per token enough to show up on the power bill.
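As one concrete example, loading a model in NF4 through the Hugging Face transformers and bitsandbytes path looks roughly like this; the model name is a placeholder, and the right quantization settings depend on your accuracy targets:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# NF4 weights with bfloat16 compute; a starting point, not a recommendation.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "your-org/your-model",   # placeholder; any causal LM checkpoint
    quantization_config=quant_config,
    device_map="auto",
)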
If your production stack still defaults to oversized dense models because “we already have the GPUs,” that assumption is getting expensive.
Scheduling gets better when you track hourly carbon intensity
Hourly grid intensity data is useful. Most teams still ignore it.
Flexible training runs can be shifted toward lower-carbon windows without touching user-facing latency. That won’t help every workload, and it matters less when a site runs mostly on dedicated gas anyway, but it still has value in mixed-grid regions like ERCOT.
A dead-simple gating pattern looks like this:
# Illustrative client wrapper; swap in your grid-data provider's actual SDK or REST endpoint.
from electricitymaps import Client

client = Client(token="YOUR_TOKEN")
intensity = client.get_live_carbon_intensity(zone="US-TEXAS-ERCOT")["gCO2eq_per_kWh"]

# Gate flexible work on a simple intensity threshold (gCO2eq/kWh).
if intensity < 300:
    start_training()    # placeholder for your job launcher
else:
    defer_job(hours=2)  # placeholder for your scheduler's deferral hook
That’s not advanced scheduling. It still beats ignoring time-of-day entirely.
Power caps are underrated
A lot of teams leave GPU power settings at maximum and call it optimization. Sometimes it’s just laziness with a benchmark attached.
For many inference and fine-tuning workloads, modest power caps barely hurt throughput but shave peak draw enough to improve thermals and cluster stability. On Nvidia hardware, nvidia-smi --power-limit is still one of the simplest knobs that has a real effect. You have to test against your own latency and throughput targets. But it often pays for itself immediately.
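A minimal sketch of applying a uniform cap across visible GPUs, assuming nvidia-smi is on the path and the process has permission to change power limits; the 350 W value is a placeholder to validate against your own targets:

import subprocess

POWER_CAP_WATTS = 350  # placeholder; tune per GPU model and workload

# nvidia-smi -L prints one line per GPU.
gpu_lines = subprocess.run(
    ["nvidia-smi", "-L"], capture_output=True, text=True, check=True
).stdout.splitlines()

for idx in range(len([line for line in gpu_lines if line.strip()])):
    # -pl / --power-limit sets the board power cap in watts (needs admin privileges).
    subprocess.run(
        ["nvidia-smi", "-i", str(idx), "-pl", str(POWER_CAP_WATTS)], check=True
    )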
Bad data pipelines waste electricity at industrial scale
Duplicate samples, low-value tokens, weak curriculum design, and indiscriminate retraining all show up as excess power burn. When model runs consume megawatt-scale capacity, sloppy data hygiene stops being an abstract MLOps complaint. It becomes infrastructure waste.
Curation, deduplication, token budgeting, and targeted refresh cycles deserve more attention than they get. They also work.
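Even the simplest version helps. A minimal exact-deduplication sketch, hash-based; near-duplicate detection with MinHash or embeddings is the usual next step:

import hashlib

def dedupe_exact(samples):
    # Drop byte-identical duplicates before they reach the training pipeline.
    seen, unique = set(), []
    for text in samples:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(text)
    return unique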
The cloud market is shifting with the grid
One thing is pretty clear: GPU availability is being bundled with power certainty.
That favors operators that can secure both chips and firm energy. CoreWeave has leaned into that. Hyperscalers are doing their own versions with long-term procurement, private generation, and, where they can get them, nuclear deals. Region choice starts to look less like a latency map and more like an energy portfolio.
That will split the market. Some customers will buy capacity wherever they can get it and live with the emissions profile. Others, especially multinationals facing stricter disclosure rules in Europe and some US states, will want hourly energy mix data, water metrics, and better carbon accounting than the usual sustainability page provides.
Developers should ask for that data. If a vendor can tell you GPU type, interconnect, and storage throughput, it can tell you where the electricity came from.
The old abstraction is breaking
Software has long treated power as somebody else’s problem. AI at gigawatt scale makes that harder to sustain.
When model campuses are built next to fracked gas fields and backed by turbines, the stack changes. Energy source affects cost curves, scheduling policy, cooling design, uptime strategy, security posture, and procurement. It also affects which efficiency work gets funded. A 5% gain stops sounding academic when it maps to real capacity, real money, and local backlash over water and land use.
Engineers don’t all need to become power systems specialists. But “faster model, lower latency, higher throughput” no longer covers the full performance story.
You also need to ask what it takes to keep the cluster on.