Channel: Artificial Intelligence | Network World

Data center spending to top $1 trillion by 2029 as AI transforms infrastructure


Massive global demand for AI technology is causing data centers to increase spending on servers, power, and cooling infrastructure. As a result, data center CapEx spending will hit $1.1 trillion by 2029, up from $430 billion in 2024, according to Dell’Oro Group.
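As a quick sanity check, the growth rate implied by those two figures can be computed directly. This is a back-of-the-envelope sketch using only the Dell'Oro numbers cited above:

```python
# Implied compound annual growth rate (CAGR) from Dell'Oro's figures:
# $430 billion in 2024 growing to $1.1 trillion in 2029.
start_capex = 430e9   # 2024 global data center CapEx
end_capex = 1.1e12    # projected 2029 CapEx
years = 2029 - 2024

cagr = (end_capex / start_capex) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 21% per year
```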

Much of this growth is due to AI.

Enterprises are now spending about 35% of their data center CapEx budgets on accelerated servers optimized for AI, up from 15% in 2023, says Dell’Oro analyst Baron Fung. And the proportion will increase to 41% by 2029.

For hyperscalers, the AI investment is even higher, as they are already spending 40% of their budgets on accelerated servers. And these servers cost far more than traditional ones, which typically run around $7,000 to $8,000, he says. “AI servers can be $100,000 to $200,000, especially when equipped with the latest Nvidia GPUs,” Fung tells Network World.

As a result, just four companies – Amazon, Google, Meta, and Microsoft – will account for nearly half of global data center CapEx this year, he says.

“Initially, I would expect most AI workloads will be in the public cloud, as opposed to on premise, given the high cost and potentially low utilization of AI infrastructure in private data centers,” says Fung. “As enterprises get a better sense of AI workload utilization, they may bring the workloads back on premises.”

His projections account for recent advances in AI and data center efficiency, he says. For example, the open-source AI model from Chinese company DeepSeek seems to have shown that an LLM can produce very high-quality results at a very low cost with some clever architectural changes to how the models work.

These improvements are likely to be quickly replicated by other AI companies. “A lot of these companies are trying to push out more efficient models,” says Fung. “There’s a lot of effort to reduce costs and to make it more efficient.”

In addition, hyperscalers are designing and building their own chips, optimized for their AI workloads. The accelerator market alone is projected to reach $392 billion by 2029, Dell’Oro predicts. By that time, custom accelerators will outpace commercially available accelerators such as GPUs.

The deployment of dedicated AI servers also has an impact on networking, power and cooling. As a result, spending on data center physical infrastructure (DCPI) will also increase, though at a more moderate pace, growing by 14% annually to $61 billion in 2029. 

“DCPI deployments are a prerequisite to support AI workloads,” says Tam Dell’Oro, founder of Dell’Oro Group, in the report.

The research firm raised its outlook in this area because actual 2024 results exceeded its expectations, and demand is spreading from tier one to tier two cloud service providers. In addition, governments and tier one telecom operators are getting involved in data center expansion, making it a long-term trend.

Networking will be heavily impacted by AI. The Ethernet network adapter market for supporting back-end networks in AI compute clusters is projected to grow at a 40% compound annual growth rate through 2029.

AI is also very power-hungry. According to Dell’Oro, the average rack power density today is around 15 kilowatts per rack, but AI workloads require 60 to 120 kilowatts per rack.

Other research supports these growth projections. According to numbers from research firm IDC, for example, AI-related data center energy consumption is projected to grow by 45% a year, reaching 146 terawatt-hours by 2027.
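Compounding at that rate adds up quickly. A minimal illustrative sketch shows the baseline consumption the 146 TWh figure implies; note that the 2023 start year is an assumption made here for illustration, not something stated in the IDC figures above:

```python
# Back-computing the implied baseline: if AI-related data center energy use
# grows 45% per year and reaches 146 TWh in 2027, what was the starting level?
# The 2023 base year is an assumption for illustration only.
target_twh = 146.0
annual_growth = 1.45
years = 2027 - 2023  # assumed compounding period

baseline_twh = target_twh / annual_growth ** years
print(f"Implied baseline: {baseline_twh:.0f} TWh")  # roughly 33 TWh
```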

According to McKinsey, a 30-megawatt data center was considered large ten years ago. “Today, a 200-megawatt facility is considered normal,” McKinsey says in a December report.

Due to AI, average power densities have more than doubled in the past two years, from 8 kilowatts per rack to 17, and are expected to rise as high as 30 kilowatts by 2027, McKinsey predicts. Training an AI model like ChatGPT can consume more than 80 kilowatts per rack, and Nvidia’s latest chips and servers require rack densities of up to 120 kilowatts.

Meanwhile, current air-cooling systems have an upper limit to their effectiveness of around 50 kilowatts per rack. As a result, data center operators are beginning to embrace liquid cooling. According to an IDC report released in September, half of organizations with high-density racks are now using liquid cooling as their predominant cooling method.
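The density figures above explain the shift: at AI densities, a given IT load fits into far fewer racks, but each rack blows past the air-cooling ceiling. A rough sketch, using the per-rack figures cited above and a hypothetical 30 MW facility (chosen to match McKinsey's "large" data center of a decade ago):

```python
import math

# Approximate per-rack air-cooling ceiling cited above.
AIR_COOLING_LIMIT_KW = 50

def racks_needed(it_load_mw: float, density_kw_per_rack: float) -> int:
    """Racks required to house a given IT load at a given power density."""
    return math.ceil(it_load_mw * 1000 / density_kw_per_rack)

# 15 kW/rack is today's average; 60-120 kW/rack is the AI workload range.
for density in (15, 60, 120):
    racks = racks_needed(30, density)  # hypothetical 30 MW facility
    cooling = "air cooling OK" if density <= AIR_COOLING_LIMIT_KW else "liquid cooling needed"
    print(f"{density:>3} kW/rack: {racks:>4} racks ({cooling})")
```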

For data centers in general, 22% now use liquid cooling, with another 61% considering it, according to a 2024 Uptime Institute report. For large data centers – those with 20MW or greater – 38% are already using direct liquid cooling.

Last year was a transition year for liquid cooling, says Lucas Beran, director of product marketing at Accelsius, a liquid cooling company. Until recently, Beran was the research director of data center physical infrastructure at Dell’Oro.

In 2023, the data center physical infrastructure market had a strong year as enterprises finished up their pandemic-induced digitization plans. “As we started to move into 2024, that’s where design changes started to materialize in new data center builds,” he tells Network World. “If you’re building a new facility, you have to think about liquid cooling even if you don’t need it from day one. You have to future-proof.”

This year, he says, liquid cooling is going to be all about scaling. “We figured out what we want to do, we have a pretty good idea how to do it, and we’re going to start deploying the infrastructure at scale,” he says. “Confidence will grow in these liquid cooling deployments, which will help the industry accelerate adoption.”

