The impact of AI on hardware and datacenters

Most discussions about AI focus on models, tools, and applications. But in practice, the impact of AI starts much lower in the stack — with infrastructure.

Servers, storage, networks, and data centers are under growing pressure as AI workloads scale. These systems require more power, more cooling, faster data movement, and larger storage capacities than traditional workloads.

Let's take a look at the numbers. According to the International Energy Agency (IEA), in a high-growth scenario, AI workloads could consume 200–400 TWh of electricity in data centers by 2030, accounting for 35–50% of total data center electricity use. For comparison, in 2023, AI workloads consumed 10–50 TWh, representing 5–15% of total data center demand (Energy and AI, 2025).
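To put those IEA ranges in perspective, a quick back-of-the-envelope calculation shows the implied growth multiple between 2023 and 2030. This is purely illustrative arithmetic on the figures quoted above, not additional data:

```python
# Illustrative check of the IEA figures cited above (TWh, data center AI workloads).
ai_2023 = (10, 50)    # consumption range in 2023
ai_2030 = (200, 400)  # projected range for 2030, high-growth scenario

# Smallest multiple pairs the high 2023 estimate with the low 2030 one, and vice versa.
min_multiple = ai_2030[0] / ai_2023[1]  # 200 / 50
max_multiple = ai_2030[1] / ai_2023[0]  # 400 / 10
print(f"Implied 2023→2030 growth: {min_multiple:.0f}x to {max_multiple:.0f}x")
```

Even the most conservative pairing implies roughly a fourfold increase in seven years.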

In this article, I'd like to share a practical view of how AI is influencing hardware today: the challenges teams face, the technology shifts already happening, and what will likely shape infrastructure decisions in the next three to five years.

Hardware and infrastructure challenges in the age of AI

Based on the experience of our DCOps department, scaling hardware isn’t straightforward: many factors come into play beyond simply buying servers.

I'd like to outline four recurring challenges shaping decisions and priorities in AI infrastructure:

  1. Infrastructure limits

The first barrier isn’t the servers themselves, but space, available kilowatts, and cooling capacity. According to McKinsey, total AI capacity in data centers will grow 3.5 times, from 44 GW in 2025 (≈54% of total 82 GW) to 156 GW in 2030 (≈70% of total 219 GW).

Limited capacity increases hosting costs and makes electricity a larger part of total ownership. So, scaling up isn’t just “buy more hardware”. Sites and power must be planned as strategic resources.
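The McKinsey numbers quoted above can be turned into an implied annual growth rate, which makes the planning horizon concrete. The following sketch is just arithmetic on the cited figures:

```python
# Rough growth-rate check of the McKinsey capacity projection cited above.
ai_capacity_2025 = 44   # GW of AI data center capacity, 2025
ai_capacity_2030 = 156  # GW projected for 2030
years = 5

multiple = ai_capacity_2030 / ai_capacity_2025
cagr = multiple ** (1 / years) - 1  # compound annual growth rate
print(f"Growth multiple: {multiple:.1f}x")
print(f"Implied CAGR: {cagr:.1%} per year")
```

An annual growth rate near 29% is why sites and power have to be treated as strategic, multi-year resources rather than line items.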

  2. Component shortages and price volatility

AI workloads have shifted supply chain demand, making standard components harder to get and more expensive. Memory and storage are most affected: RAM, SSDs, and enterprise storage that weren’t problematic before are now in short supply.

As a result, equipment takes longer to arrive, and budgeting in advance has become harder.

  3. Short-lived quotes and tricky budgeting

Hardware quotes can become outdated in just a few weeks or even days. Frequent price changes make it harder to plan capital expenditures reliably. Teams have to stay flexible, revisiting budgets and procurement plans more often than before.

  4. Buy-ahead strategies with side effects

To avoid higher costs or delays in the near future, companies increasingly purchase equipment in advance. While this ensures the availability of critical components, it also brings extra costs for storage, requires additional space, and ties up working capital that could be used elsewhere.

Besides these operational challenges, there have been noticeable shifts in hardware technology that deserve attention:

  • Higher power density has become a new challenge for server design

AI workloads are increasingly deployed in high-density rack configurations. This rise in power per rack means teams must rethink how servers are arranged, how cooling is delivered, and how hardware is connected to ensure consistent performance under denser setups.
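To illustrate why density changes the design conversation, here is a minimal sketch of rack power and the heat that cooling must reject. The per-server draw and rack size below are assumed example values, not vendor specifications:

```python
# Hypothetical rack-density sketch; numbers are illustrative assumptions.
servers_per_rack = 8
kw_per_server = 10.0  # a dense GPU node can draw power on this order

rack_power_kw = servers_per_rack * kw_per_server  # total IT load per rack

# Essentially all IT power becomes heat; 1 kW ≈ 3412 BTU/h to reject.
btu_per_hour = rack_power_kw * 3412
print(f"Rack power: {rack_power_kw:.0f} kW ≈ {btu_per_hour:,.0f} BTU/h of heat")
```

An 80 kW rack carries several times the heat load of a traditional 5–15 kW enterprise rack, which is what pushes teams toward liquid or rear-door cooling.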

  • Networks are evolving – high-speed interconnects are becoming standard

AI and modern distributed workloads are driving the need for faster, low-latency networks. High-speed interconnects are no longer optional for top-tier clusters – they’re a baseline requirement. This shift impacts how network hardware is deployed, from switching and optics to cabling and overall design. The goal is to ensure data moves quickly and reliably in high-performance environments.
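A simple transfer-time calculation shows why link speed is now a baseline requirement. The checkpoint size below is an assumed example figure:

```python
# Illustrative only: time to move a large model checkpoint at common line rates.
checkpoint_gb = 500  # assumed checkpoint size in GB

transfer_s = {}
for gbps in (10, 100, 400):  # typical Ethernet / fabric speeds
    # Convert GB to gigabits, then divide by line rate (ignores protocol overhead).
    transfer_s[gbps] = checkpoint_gb * 8 / gbps
    print(f"{gbps} Gbps link: {transfer_s[gbps]:.0f} s per transfer")
```

Going from 10 to 400 Gbps turns a multi-minute checkpoint transfer into seconds, which matters when clusters synchronize state continuously.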

These hardware and infrastructure shifts are already driving major investments. IDC estimates that spending on AI infrastructure, including compute and storage, reached $82 billion in Q2 2025 (a 166% year-on-year increase). And this is just the beginning: by 2029, spending could total $758 billion.
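The year-on-year figure above implies a baseline for the prior year, which is worth making explicit. Again, this is only arithmetic on the cited IDC numbers:

```python
# Sanity check of the IDC figures cited above (illustrative only).
q2_2025_spend = 82.0  # $B, AI infrastructure spend in Q2 2025
yoy_increase = 1.66   # 166% year-on-year increase

q2_2024_spend = q2_2025_spend / (1 + yoy_increase)
print(f"Implied Q2 2024 spend: ${q2_2024_spend:.1f}B")
```

In other words, spending roughly tripled from about $31B in a single year.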

These changes will continue, so to prepare for them in advance, let’s look at how hardware is likely to evolve over the next few years.

How AI will shape hardware in the next 3–5 years

From my perspective, several trends are likely to influence how companies plan, deploy, and procure infrastructure in the coming years:

Infrastructure constraints will remain a key factor

Limits around power, cooling, and available space in data centers will continue to influence infrastructure strategies. AI workloads make access to electricity and the right data center location almost as critical as the hardware itself.

As a result, long-term capacity planning, energy efficiency, and careful site selection will become increasingly important.

Performance competition will extend beyond processors

The race for performance will increasingly depend not only on CPUs or GPUs, but also on memory, storage, and networking.

The main goal will be to “feed” compute faster: move data more quickly across the network, read and write data faster in storage systems, and process data efficiently in memory. These requirements directly affect the design of servers and AI clusters.

Procurement will become a more strategic function

Hardware purchasing is gradually turning into a financial and planning discipline rather than a purely operational task. Organizations are already adopting practices such as framework agreements with price indexation, multi-sourcing strategies, and standardized configurations to reduce supply risks.

Buffer stocks for critical components are also becoming more common. At the same time, companies are increasingly exploring the market for used and refurbished equipment.

In practice, this means closer coordination between procurement, capacity planning, and finance – something that will likely become standard for companies scaling AI infrastructure.
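One way to size the buffer stocks mentioned above is the classic safety-stock formula from inventory planning. The demand and lead-time figures below are made-up assumptions for illustration, not recommendations:

```python
import math

# Classic safety-stock sizing: z * demand_std * sqrt(lead_time).
# All input values are hypothetical examples.
z = 1.65                  # service-level factor (~95% service level)
demand_std_per_week = 12  # std deviation of weekly demand, in units
lead_time_weeks = 16      # assumed supplier lead time

safety_stock = z * demand_std_per_week * math.sqrt(lead_time_weeks)
print(f"Suggested buffer: {safety_stock:.0f} units")
```

The formula makes the trade-off visible: as lead times stretch (the square-root term), the capital tied up in buffer stock grows with them.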

In summary

Considering the significant changes already happening and those on the horizon, it’s clear that companies should start planning capacity as early as possible.

Businesses should build flexible architectures and coordinate procurement closely with engineering and finance. Doing so will provide a clear advantage: in many cases, the success of AI projects will depend not just on the quality of the models but on how well the underlying hardware ecosystem is prepared to support them.