Artificial intelligence is reshaping the global economy—and the infrastructure that supports it is struggling to keep up.
On one side of the spectrum sits Vermaland's audacious $33 billion vision to build a mega data center hub in Arizona, an ambitious response to soaring demand for AI compute. On the other, in China, visiting U.S. experts return stunned by how seamlessly AI infrastructure is already integrated into the electrical grid and broader energy ecosystem.
Land developer Vermaland's proposed project, dubbed the La Osa Project, could transform part of Pinal County, Arizona, into one of the largest data center campuses in the U.S. The company is pitching a $33 billion build spanning 3,300 acres with 3 GW of capacity, more than is currently available in the entire Phoenix metro area.
During rezoning discussions, officials raised concerns about environmental impacts, flood zones, and infrastructure strain. Still, Vermaland envisions the site as a massive hub where companies can lease or buy space, powered by upgraded transmission lines from the Western Area Power Administration.
This scale of build-out reflects just how critical AI-ready compute infrastructure has become. McKinsey estimates that by 2030, the world will need nearly $6.7 trillion in new data center investments to support AI growth. The La Osa Project would be a bold, tangible response to that pressing demand.
When U.S. experts visit Chinese data center hubs, they come back amazed not by the hardware but by the power infrastructure. As one expert noted:
> “Everywhere we went, people treated energy availability as a given.”
In China, building enough power to support AI data centers is considered a solved problem.
That’s because China has invested heavily in hydropower, nuclear, and renewables, resulting in low electricity prices and a grid that runs with 80–100 percent reserve capacity. China consciously overbuilds power generation, enabling rapid expansion of AI hubs without grid concerns. Tech insiders say China can “hit grand slams,” while the U.S., with its fragile grid, can “at best, get on base.”
Experts warn that U.S. AI development risks being slowed not by chip shortages but by power constraints. Tech giants are building private power plants, or even planning nuclear reactors, just to keep up with compute demand.
| Challenge | United States (Arizona Example) | China |
| --- | --- | --- |
| Scale of Data Center Capacity | La Osa Project could house 3 GW, more than the entire Phoenix metro | Hundreds of new data centers already built thanks to surplus energy |
| Grid Infrastructure | Strained grid; separate builds needed (e.g., private power plants) | Resilient, high-reserve grid with built-in flexibility |
| Regulatory & Environmental Concerns | Rezoning hurdles, environmental scrutiny | Top-down planning allows fast scaling with less visible friction |
| Funding & Strategic Posture | Private-led, expensive, piecemeal | State-backed, long-term planning, supportive of domestic infrastructure |
| AI Growth Risks | Power bottlenecks may limit AI potential | Power is abundant; AI centers are enabled, not constrained |
The two approaches, Arizona's bold private mega-builds versus China's state-led, grid-first ecosystem, reflect contrasting models for scaling AI infrastructure. The key question now: can the U.S. resolve its grid fragility to support AI growth at scale before energy, not ideas, defines the race?