# AI Data Center Power Consumption Calculator
Calculate energy usage, electricity costs, and carbon footprint for AI training and inference clusters. Includes a reference database of NVIDIA H100/A100, AMD MI300, and Google TPU accelerators.
## About
Deploying AI infrastructure requires precise energy modeling. Unlike traditional web servers, AI training clusters operate at high thermal design power (TDP) densities, often running GPUs at peak load for weeks or months. Miscalculating the power budget affects electrical provisioning, cooling design (reflected in PUE), and operational expenditure (OpEx).
This tool models the total energy footprint of an AI cluster. It accounts for accelerator power (GPUs/TPUs), server overhead (CPUs, RAM, Networking), cooling efficiency, and utilization rates. Accuracy here is critical for determining the feasibility of large language model (LLM) training runs or sizing the backup power systems for inference nodes.
## Formulas
The calculation splits the power load into IT equipment (compute) and infrastructure (cooling and distribution losses). The core formula for total power consumption P_total is:

P_total = N_node × (N_gpu × P_gpu × U_factor + P_base) × PUE
Where:
- N_node is the number of server nodes in the cluster.
- N_gpu is the count of accelerators per node.
- P_gpu is the thermal design power (TDP) of a single accelerator, in watts.
- P_base is the power draw of the host system per node (dual CPUs, RAM, NVMe, fans).
- U_factor is the utilization factor (0.0 to 1.0).
- PUE (Power Usage Effectiveness) represents facility efficiency (total facility power ÷ IT equipment power).
Total energy E over a duration t (in hours) is calculated as:

E = P_total × t ÷ 1000  (kWh)
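The model above can be sketched in a few lines of Python. The cluster values in the example (16 nodes of 8 GPUs at a 700 W TDP, an 800 W host base, 90% utilization, PUE 1.3) are illustrative assumptions, not defaults of the calculator:

```python
def total_power_watts(n_nodes, gpus_per_node, gpu_tdp_w, base_w, utilization, pue):
    """P_total = N_node * (N_gpu * P_gpu * U_factor + P_base) * PUE."""
    it_power_w = n_nodes * (gpus_per_node * gpu_tdp_w * utilization + base_w)
    return it_power_w * pue

def energy_kwh(p_total_w, hours):
    """E = P_total * t, converted from watt-hours to kilowatt-hours."""
    return p_total_w * hours / 1000.0

# Illustrative cluster: 16 nodes x 8 GPUs, 700 W TDP each.
p = total_power_watts(n_nodes=16, gpus_per_node=8, gpu_tdp_w=700,
                      base_w=800, utilization=0.9, pue=1.3)
e = energy_kwh(p, hours=24 * 30)  # a 30-day training run
print(f"{p / 1000:.1f} kW, {e:,.0f} kWh")
```

Note that U_factor scales only the accelerator draw; the host base power is treated as constant, which matches how idle servers still burn most of their platform power.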
## Reference Data
| Accelerator Model | Manufacturer | TDP (Watts) | Memory (VRAM) | Interconnect | Est. System Power Adder |
|---|---|---|---|---|---|
| H100 SXM5 | NVIDIA | 700 | 80 GB HBM3 | NVLink | +400W (per 4 GPUs) |
| H100 PCIe | NVIDIA | 350 | 80 GB HBM2e | PCIe Gen5 | +200W |
| A100 SXM4 | NVIDIA | 400 | 80 GB HBM2e | NVLink | +300W |
| A100 PCIe | NVIDIA | 250 | 40 GB HBM2 | PCIe Gen4 | +150W |
| V100 SXM2 | NVIDIA | 300 | 32 GB HBM2 | NVLink | +250W |
| MI300X | AMD | 750 | 192 GB HBM3 | Infinity Fabric | +450W |
| MI250X | AMD | 560 | 128 GB HBM2e | Infinity Fabric | +350W |
| Instinct MI210 | AMD | 225 | 64 GB HBM2e | PCIe Gen4 | +150W |
| Gaudi 2 | Intel | 600 | 96 GB HBM2e | Ethernet | +350W |
| TPU v4 (Pod) | Google | 220 (est.) | 32 GB HBM | ICI | N/A (Custom) |
| TPU v5p | Google | 450 (est.) | 95 GB HBM | ICI | N/A (Custom) |
| L40S | NVIDIA | 350 | 48 GB GDDR6 | PCIe Gen4 | +150W |
| RTX 6000 Ada | NVIDIA | 300 | 48 GB GDDR6 | PCIe Gen4 | +150W |
| T4 Tensor Core | NVIDIA | 70 | 16 GB GDDR6 | PCIe Gen3 | +50W |
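Putting the table together with the formulas, a cost and carbon estimate for an H100 SXM5 cluster (700 W TDP, +400 W system adder per 4 GPUs, per the table) might look like the sketch below. The electricity price and grid carbon intensity are illustrative assumptions, not part of the reference data:

```python
GPU_TDP_W = 700            # H100 SXM5, from the table above
ADDER_W_PER_4_GPUS = 400   # est. system power adder, from the table
PRICE_USD_PER_KWH = 0.12   # assumed industrial electricity rate
CO2_KG_PER_KWH = 0.4       # assumed grid carbon intensity

def cluster_energy_kwh(num_gpus, utilization, pue, hours):
    """Energy for a cluster of H100 SXM5 GPUs plus host overhead, in kWh."""
    gpu_w = num_gpus * GPU_TDP_W * utilization
    adder_w = (num_gpus / 4) * ADDER_W_PER_4_GPUS
    return (gpu_w + adder_w) * pue * hours / 1000.0

# Illustrative: 512 GPUs at 85% utilization, PUE 1.25, two-week run.
kwh = cluster_energy_kwh(num_gpus=512, utilization=0.85, pue=1.25, hours=24 * 14)
print(f"{kwh:,.0f} kWh, ${kwh * PRICE_USD_PER_KWH:,.0f}, "
      f"{kwh * CO2_KG_PER_KWH / 1000:,.1f} t CO2e")
```

Swapping in a different row of the table only requires changing the TDP and adder constants; the utilization, PUE, and grid assumptions dominate the final cost far more than small differences between accelerator models.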