U.S.-Saudi Investment Forum in Washington, D.C.

Saudi supercluster gives xAI first access to 600,000 Nvidia GPUs

A Saudi-backed developer is moving ahead with one of the largest AI data centers planned anywhere, and Elon Musk's xAI will be its first major customer. The facility is being developed by Humain, a company backed by Saudi Arabia's Public Investment Fund (PIF), and was announced at the U.S.-Saudi Investment Forum in Washington, D.C.

The new supercluster is expected to deploy about 600,000 Nvidia GPUs over time. Saudi officials frame the project as part of a wider national push to build AI capacity and draw global technology partners into the Kingdom.

xAI secures priority access to massive compute

Humain has granted xAI early and priority access to this new infrastructure, giving Musk's company a major long-term source of compute outside the United States. The first phase of the data center is designed to provide about 500 megawatts of Nvidia GPU power, making it one of the largest AI-focused builds publicly disclosed so far.

Video: via U.S.-Saudi Investment Forum

Nvidia CEO Jensen Huang highlighted how unusual it is for a startup like xAI, which he jokingly described as having "approximately $0 billion in revenues," to be at the center of a project of this size. He said the data center "is gigantic" and noted that the company behind it "is off the charts right away," underscoring both the scale and the speed of the partnership.

Multi-vendor hardware plan

Nvidia hardware will serve as the backbone of the initial rollout, yet Humain is structuring the project as a multi-vendor platform to reduce dependence on a single supplier and better match different workloads. As part of that plan, AMD will provide Instinct MI450 accelerators, with capacity expected to grow toward 1 gigawatt of power draw by around 2030 as deployment stages come online.

Qualcomm is joining the buildout with its AI200 and AI250 data center processors, which are projected to support about 200 megawatts of additional compute once fully installed. Cisco will handle key parts of the networking and infrastructure layer, stitching together the mixed hardware stack so the various accelerators can operate within one cohesive environment.

Musk’s comments on scale and future demand

During the announcement, Musk used his usual dry humor to comment on the extreme growth in compute needed for frontier AI models. He joked that scaling this Saudi supercluster up by a factor of one thousand "would be 8 bazillion, trillion dollars," a tongue-in-cheek way to point to the rising cost of state-of-the-art AI training runs.

Even so, his remarks still underscored a serious point: as models grow larger and more complex, global demand for high-end GPUs and accelerators is climbing faster than traditional data center planning cycles. For xAI, early access to a 500‑megawatt class site backed by 600,000 Nvidia GPUs and a diversified accelerator mix could become a key asset in that race.


