DataVolt – Hyperscale AI with Ultra-Dense NVIDIA GPU Platforms

DataVolt, a Saudi Arabian data-center developer, aims to deliver ultra-dense NVIDIA GPU platforms and rack-scale systems across its hyperscale AI campuses in Saudi Arabia and the United States.

DataVolt plans to build a net-zero AI computing campus at NEOM’s Oxagon

DataVolt’s deployment of NVIDIA GPU-powered high-compute infrastructure—backed by Supermicro’s rack-scale systems, advanced liquid cooling, and renewable energy supply—marks a major leap forward in building sustainable AI data-center campuses. With scale, efficiency, and performance at the forefront, DataVolt is carving a leadership position in eco-conscious, high-performance AI infrastructure.

The GPUs at the heart of DataVolt’s compute campus are based on NVIDIA’s cutting-edge Blackwell architecture, deployed via HGX B200 rack-scale systems and NVL72 configurations. These systems deliver extreme performance per watt and support AI models with trillions of parameters.
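
A quick sizing sketch helps put the trillion-parameter claim in context. The per-GPU memory figure (192 GB of HBM) and the 8-bit weight precision below are illustrative assumptions rather than figures from DataVolt or this article.

```python
# Back-of-the-envelope sizing: why trillion-parameter models need rack-scale GPU systems.
def min_gpus_for_weights(params: float, bytes_per_param: float, gpu_mem_gb: float) -> int:
    """Minimum GPUs needed just to hold the model weights (ignoring activations,
    KV caches, and optimizer state, which add substantially more)."""
    weight_bytes = params * bytes_per_param
    gpu_bytes = gpu_mem_gb * 1e9
    return -(-int(weight_bytes) // int(gpu_bytes))  # ceiling division

params = 2e12                     # a hypothetical 2-trillion-parameter model
gpus = min_gpus_for_weights(params, bytes_per_param=1, gpu_mem_gb=192)
print(f"Weights alone: {params / 1e12:.1f} TB at 1 byte/param -> at least {gpus} GPUs")
```

Training adds gradients and optimizer state that typically dwarf the weight footprint, which is why multi-rack, NVLink-connected clusters are used rather than individual servers.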

DataVolt – Key Differentiators

  • Sustainability by Design: A zero-carbon commitment, renewable-power-first architecture, and water recycling across operations.
  • AI-Optimized Infrastructure: Tailored for generative AI and high-density GPU workloads through hardware collaboration and modular campus design.
  • Scalability Across Regions: Rapid expansion in the Middle East, South Asia, Central Asia, and Africa leveraging local partnerships and government support.
  • Circular Cooling Innovation: Industry-leading approaches to liquid cooling and heat reuse, developed alongside chemistry and utility partners.

Key Capabilities:

  • Rack-Scale GPU Clusters: Systems like the NVL72 integrate up to 72 Blackwell GPUs with Grace CPUs, offering ~130 TB/s of aggregate inter-GPU bandwidth via NVLink and scaling to hundreds of GPUs across multiple racks.
  • Efficiency Gains: Supermicro’s liquid cooling infrastructure reduces data-center power costs by up to 40% and TCO by ~20%, enabling denser deployment and optimized thermal management (see the illustrative arithmetic after this list).
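
A short sketch of what those figures mean in practice. The facility size and baseline costs are hypothetical; only the up-to-40% power-cost reduction, ~20% TCO reduction, and ~130 TB/s NVL72 bandwidth figures come from the text above.

```python
# Illustrative arithmetic for the capabilities listed above.
BASELINE_ANNUAL_POWER_COST = 50_000_000   # USD, hypothetical campus-scale power bill
BASELINE_ANNUAL_TCO = 120_000_000         # USD, hypothetical total cost of ownership

power_savings = BASELINE_ANNUAL_POWER_COST * 0.40   # "up to 40%" power-cost reduction
tco_savings = BASELINE_ANNUAL_TCO * 0.20            # "~20%" TCO reduction
per_gpu_nvlink_tbps = 130 / 72                      # share of the ~130 TB/s aggregate NVLink fabric

print(f"Power-cost savings: up to ${power_savings:,.0f}/year")
print(f"TCO savings:        ~${tco_savings:,.0f}/year")
print(f"NVLink bandwidth:   ~{per_gpu_nvlink_tbps:.1f} TB/s per GPU across 72 GPUs")
```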

Eco‑Aware Compute Infrastructure

DataVolt’s GPU-intensive infrastructure is tightly integrated with renewable energy sources and net-zero green hydrogen power systems operating at gigawatt scale.

That integration promotes:

  • Clean energy sourcing: Lowering the carbon footprint per compute cycle.
  • Reduced operational waste: Liquid cooling eliminates water-heavy chillers and supports heat reuse strategies.
  • Energy-efficient compute: GPU infrastructure that aligns with global sustainability goals.

Further environmental innovation comes through cooperation with Chemours, implementing two-phase direct-to-chip and immersion cooling using Opteon dielectric fluids. These systems cut cooling energy consumption by up to 90%, lower TCO by 40%, and support high compute densities with minimal environmental impact.
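
To illustrate what a 90% reduction in cooling energy could mean for facility efficiency, the sketch below works through the effect on PUE. The baseline overhead split is a hypothetical example, not a figure from DataVolt or Chemours.

```python
# Effect of a 90% cooling-energy cut on PUE (Power Usage Effectiveness), under
# illustrative assumptions: cooling = 35% of IT load, other overheads = 10%.
it_load_mw = 100.0
baseline_cooling = 0.35 * it_load_mw
other_overhead = 0.10 * it_load_mw

def pue(it: float, cooling: float, other: float) -> float:
    """PUE = total facility power / IT power (1.0 is the theoretical ideal)."""
    return (it + cooling + other) / it

improved_cooling = baseline_cooling * (1 - 0.90)   # "up to 90%" cooling-energy reduction

print(f"Baseline PUE: {pue(it_load_mw, baseline_cooling, other_overhead):.2f}")
print(f"Improved PUE: {pue(it_load_mw, improved_cooling, other_overhead):.2f}")
```

Under these illustrative assumptions, PUE drops from about 1.45 to roughly 1.14, which is where the denser, lower-impact deployments come from.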

Scalability & Speed: These infrastructure designs allow rapid deployment of massive AI clusters with fewer physical rack constraints and a considerable reduction in time-to-online.

Impact on AI Cloud & Competitive Positioning

DataVolt’s GPU‑powered campuses are purpose-built for advanced AI:

  • Generative AI & Large Model Training: Blackwell GPUs and high-throughput interconnects support massive neural networks with smooth scalability.
  • Inference at Cloud Scale: The high throughput, energy-efficient infrastructure supports demanding, real-time AI inference workloads.
  • Global AI Infrastructure Leadership: Positioned as one of the few providers offering U.S.-sourced GPU infrastructure at hyperscale with sustainable operations.

Moreover, DataVolt’s collaboration with Recogni Inc. enables integration of low-power, log-math inference silicon alongside NVIDIA systems to further boost energy efficiency and performance per watt across workloads.
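
The energy advantage of log-math (logarithmic number system) arithmetic comes from replacing hardware multipliers with adders: in the log domain, multiplication becomes addition. Recogni's actual silicon and number format are proprietary; the sketch below only illustrates the general principle.

```python
# Minimal illustration of log-domain ("log-math") multiplication: signs multiply,
# log-magnitudes add, so no hardware multiplier is needed for the core operation.
import math

def to_log(x: float):
    """Encode a nonzero value as (sign, log2 of magnitude)."""
    return (1 if x >= 0 else -1), math.log2(abs(x))

def log_multiply(a, b):
    """Multiply two log-encoded values: multiply signs, add log-magnitudes."""
    (sa, la), (sb, lb) = a, b
    return sa * sb, la + lb

def from_log(v) -> float:
    sign, log_mag = v
    return sign * 2.0 ** log_mag

w, x = 0.75, -3.2
print(from_log(log_multiply(to_log(w), to_log(x))), w * x)   # both print ≈ -2.4
```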

DataVolt – Sustainable Data Center Operator