Tags: AI colocation, GPU colocation, high density, liquid cooling

AI Colocation Guide: What GPU Workloads Need That Standard Racks Can't Provide

JustColo Team · March 15, 2026 · 8 min read

The Power Density Gap

Traditional enterprise colocation was designed for a different era. Most facilities built in the 2000s and 2010s provisioned 5-8kW per rack — plenty for general-purpose servers, storage arrays, and networking equipment. But AI workloads have broken those assumptions.

A single NVIDIA DGX H100 system draws 10.2kW. A fully loaded rack of GPU servers can easily exceed 40kW, with some configurations pushing past 100kW. This isn't a minor infrastructure upgrade — it's a fundamental rethinking of how data centers are built.
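To put those numbers in context, here's a minimal back-of-the-envelope sketch. The per-system draw is NVIDIA's published maximum for the DGX H100 (cited above); the systems-per-rack count and networking overhead are illustrative assumptions, not a recommended configuration:

```python
# Back-of-the-envelope rack power estimate for GPU systems.
DGX_H100_KW = 10.2         # NVIDIA's published maximum system power
SYSTEMS_PER_RACK = 4       # a common high-density layout -- assumption
NETWORK_OVERHEAD_KW = 2.0  # switches, PDUs, etc. -- illustrative assumption

rack_kw = SYSTEMS_PER_RACK * DGX_H100_KW + NETWORK_OVERHEAD_KW
print(f"Estimated rack draw: {rack_kw:.1f} kW")  # ~42.8 kW
```

Even a conservative four-system rack lands well past the 40kW mark, before accounting for anything else in the cabinet.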

Why Standard Colocation Fails AI Workloads

When enterprises try to deploy AI infrastructure in traditional colocation, they hit three critical walls:

**Power Density**: Most facilities can't deliver more than 10-15kW to a single rack without major infrastructure modifications. Even if they claim "high density" support, the actual available capacity may be limited.

**Cooling Capacity**: Air cooling reaches its practical limits around 30-40kW per rack. Beyond that, you need liquid cooling — either direct-to-chip (DLC) or rear-door heat exchangers (RDHx). Most colocation providers simply don't have this infrastructure.

**Power Availability**: AI clusters are hungry. A 100-rack GPU deployment might need 4-10MW of dedicated power. Facilities designed for traditional workloads may not have the electrical infrastructure to support this concentration.
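Scaling that arithmetic up shows why total power availability becomes the binding constraint. A minimal sketch, assuming 40kW racks and a PUE of 1.3 (both illustrative figures, not numbers for any specific facility):

```python
# Rough campus power estimate for a multi-rack GPU deployment.
RACKS = 100
KW_PER_RACK = 40.0  # illustrative high-density rack draw
PUE = 1.3           # power usage effectiveness -- assumption; varies by facility

it_load_mw = RACKS * KW_PER_RACK / 1000
total_mw = it_load_mw * PUE
print(f"IT load: {it_load_mw:.1f} MW, total facility draw: {total_mw:.1f} MW")
# IT load: 4.0 MW, total facility draw: 5.2 MW
```

A 4MW IT load becomes more than 5MW at the meter once cooling and distribution losses are included, and denser racks push the total higher still.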

Liquid Cooling: No Longer Optional

For deployments exceeding 40kW per rack, liquid cooling isn't a luxury — it's a requirement. The two primary approaches are:

**Direct-to-Chip (DLC)**: Coolant flows directly to cold plates mounted on GPUs and CPUs, providing the most efficient heat removal. Required for the highest-density configurations.

**Rear-Door Heat Exchangers (RDHx)**: A liquid-cooled door replaces the rear of the rack, capturing heat as air exits. Effective for 40-80kW deployments without modifying server hardware.
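The physics behind these thresholds is straightforward: the heat a coolant loop can remove equals flow rate times specific heat times the temperature rise across the loop. A minimal sketch using water's specific heat and an assumed 10 °C supply-to-return rise:

```python
# Coolant flow needed to remove a given heat load: Q = m_dot * c_p * dT.
HEAT_LOAD_KW = 80.0  # rack heat load to remove -- illustrative assumption
CP_WATER = 4.186     # specific heat of water, kJ/(kg*K)
DELTA_T = 10.0       # supply-to-return temperature rise, K -- assumption

mass_flow_kg_s = HEAT_LOAD_KW / (CP_WATER * DELTA_T)  # kW = kJ/s
flow_l_min = mass_flow_kg_s * 60                      # ~1 kg of water per litre
print(f"Required coolant flow: {flow_l_min:.0f} L/min")  # ~115 L/min
```

Water carries roughly 3,000 times more heat per unit volume than air at the same temperature rise, which is why liquid loops remain practical at densities where airflow simply cannot keep up.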

JustColo's Princeton facility is built liquid-cooling-ready from day one, supporting both DLC and RDHx configurations for racks exceeding 100kW.

Choosing the Right AI Colocation Partner

When evaluating facilities for AI workloads, ask these questions:

  • What is the maximum power density per rack — and how many racks can support that density?
  • What liquid cooling options are available, and what's the lead time for deployment?
  • What's the total campus power capacity, and how much is available for new deployments?
  • What connectivity options exist for hybrid cloud AI workflows?

The answers will quickly separate facilities built for the AI era from those retrofitting legacy infrastructure.
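One way to make that evaluation concrete is to screen candidates against your minimum requirements mechanically. A minimal sketch; the thresholds and facility entries below are hypothetical placeholders, not real providers or quoted specs:

```python
# Screen candidate facilities against minimum AI-deployment requirements.
requirements = {"max_rack_kw": 40, "liquid_cooling": True, "available_mw": 5.0}

facilities = [  # hypothetical example data
    {"name": "Legacy Metro DC", "max_rack_kw": 12,
     "liquid_cooling": False, "available_mw": 2.0},
    {"name": "AI-Ready Campus", "max_rack_kw": 100,
     "liquid_cooling": True, "available_mw": 20.0},
]

def meets(facility, req):
    """True if a facility satisfies every minimum requirement."""
    return (facility["max_rack_kw"] >= req["max_rack_kw"]
            and (facility["liquid_cooling"] or not req["liquid_cooling"])
            and facility["available_mw"] >= req["available_mw"])

for f in facilities:
    verdict = "qualifies" if meets(f, requirements) else "does not qualify"
    print(f"{f['name']}: {verdict}")
```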

Princeton: Built for AI from Day One

JustColo's Princeton facility opens in January 2027 with native support for high-density AI workloads: 15-40kW air-cooled racks, liquid cooling infrastructure for 100kW+ configurations, and direct fiber connectivity to major cloud on-ramps.

Ready to discuss your AI infrastructure requirements? Let's talk.
