
DeepSeek & the Energy Question No One Is Asking

  • Writer: Adrien Book
  • Jan 29
  • 5 min read

[Image: DeepSeek and data centers]

Over the past few days, the AI industry has been in an uproar over a Chinese AI lab called DeepSeek. And for good reason. The company has just released a state-of-the-art Large Language Model, R1, “trained” for just $6 million on roughly 2,000 Nvidia chips. That’s a fraction of the billions spent by labs like OpenAI, Google DeepMind, and Anthropic.

That low cost is coupled with the reveal that, since going live on January 20, DeepSeek-R1 has earned good marks for its performance… and rivals its larger competitors. There are also numbers flying around claiming it’s 27x cheaper to operate than ChatGPT (per million tokens).

In short? We’re not just seeing yet another LLM launch. It’s a paradigm shift.

If DeepSeek can achieve GPT-4o-level performance at 1/100th of the cost, the implications extend far beyond model training. They force us to rethink everything about AI economics, infrastructure, and sustainability. That’s why the Nasdaq fell by 3% on Monday, dragged down by a nearly 17% drop in chipmaker Nvidia’s shares.

But, amid all the debates over model quality, valuations, and the geopolitical AI race, some crucial questions are being ignored: 

  • What does this mean for the hundreds of data centers currently being built in anticipation of very high compute density… density that may now never come?

  • Why have we always measured data center efficiency in terms of power waste, but never questioned how efficiently IT actually uses that power?

  • What if the real AI bottleneck isn’t hardware availability — but the inefficiency of the software itself?

The Wrong War: Why Data Centers Focused on the .45, Not the 1

For years, the primary efficiency metric for data centers has been Power Usage Effectiveness (PUE). A PUE of 1.45 (the Middle East average) means that for every 1 kWh of IT power, an additional 0.45 kWh is consumed by air conditioning, lighting, toilets, etc.

The industry has poured billions into squeezing that .45 down, making buildings more efficient (“optimised rack positioning”, if you’re nasty), reducing overheads, and even investing in exotic liquid cooling systems to push PUE closer to 1.1, or even 1.05.

But I, and many others, have long argued that this obsession with overheads misses the bigger picture. What if the real inefficiency isn’t in the cooling but in the computing itself? What if, instead of cutting the .45, we started cutting the 1? Sure, that would lead to higher PUEs; that’s just math. But AI would be less energy-hungry overall, at a time when its consumption is making headlines.
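A back-of-envelope sketch in Python makes the trade-off concrete. The figures are illustrative only, built on the 1.45 regional average cited above: halving the IT load beats even heroic facility engineering, despite leaving the PUE “worse”.

```python
# PUE = total facility energy / IT equipment energy.
# Illustrative figures, using the 1.45 average cited above.

it_load_kwh = 1.00      # the "1": energy the servers actually draw
overhead_kwh = 0.45     # the ".45": cooling, lighting, losses
pue = (it_load_kwh + overhead_kwh) / it_load_kwh    # 1.45

# Option A: heroic facility engineering pushes PUE from 1.45 to 1.10
total_a = it_load_kwh * 1.10                        # 1.100 kWh

# Option B: software gets 2x more efficient; the facility stays mediocre
total_b = (it_load_kwh / 2) * pue                   # 0.725 kWh

print(f"PUE today:    {pue:.2f}")
print(f"Cut the .45:  {total_a:.3f} kWh per unit of work")
print(f"Cut the 1:    {total_b:.3f} kWh per unit of work")
```

Option B reports a worse PUE than Option A, yet burns roughly a third less energy for the same work. That is exactly the point.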

DeepSeek’s breakthrough proves this is not just theoretical:

  • Instead of brute-forcing AI development with ever-larger clusters of H100 GPUs, DeepSeek optimized every layer of the process

  • Using Mixture-of-Experts (MoE), load-balancing innovations, and multi-head latent attention, they reduced compute overhead by orders of magnitude (a toy sketch of MoE routing follows this list)

  • They trained their latest model for under $6 million, using a relatively inefficient cluster of export-restricted H800 GPUs, and still achieved near-GPT-4o levels of performance
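For readers unfamiliar with the technique, here is a toy sketch of MoE routing in Python. It is a generic illustration of the idea, not DeepSeek’s actual implementation; all dimensions and names are made up. A gate picks the top-k of n experts per token, so only a fraction of the weights do any work:

```python
import numpy as np

# Toy Mixture-of-Experts layer: route each token to top_k of n_experts.
rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 8, 2

experts = [rng.standard_normal((d_model, d_model)) * 0.02
           for _ in range(n_experts)]
gate_w = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_layer(x):
    # x: (d_model,) activation for one token
    logits = x @ gate_w                        # score every expert
    top = np.argsort(logits)[-top_k:]          # indices of the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                   # softmax over the chosen experts
    # Only top_k of n_experts matrices are multiplied, so compute scales
    # with k/n of the dense equivalent — which is the whole point.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_layer(rng.standard_normal(d_model))
print(f"active experts per token: {top_k}/{n_experts}")
```

With top_k = 2 of 8 experts, each token touches a quarter of the expert weights a dense layer of the same total size would, which is where the compute savings come from.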

In other words: DeepSeek squeezed the 1.

This raises a fundamental challenge for every AI lab, hyperscaler, and data center operator: if models can be made 100x more efficient, why are we still spending billions on dense, GPU-heavy clusters and extreme cooling solutions?

What Happens Next?

The AI industry runs on expensive, real-world infrastructure. Walls, electrical fit-out, cables, etc. Nvidia, OpenAI, Microsoft, Amazon, and Meta are spending hundreds of billions to scale up AI data centers — because their assumption has always been that better AI requires more compute.

DeepSeek R1 proves that assumption wrong, with massive consequences for the industry.

Nvidia’s Stranglehold Weakens

Nvidia has thrived because AI demand has outpaced supply. But if models become far more efficient, then:

  • Demand for expensive GPUs drops

  • Alternatives (AMD, Huawei Ascend, in-house AI chips) become viable

  • The Nvidia premium disappears

Today, Nvidia controls 80%+ of the AI chip market. But DeepSeek just demonstrated that model efficiency can offset hardware limitations, and that shifts the balance of power.

This shouldn’t be overstated; Nvidia is still the OG… but their foundation is shakier today than it was yesterday.

The Data Center CAPEX Boom is Unsustainable

Tech giants are on track to spend over $2 trillion on AI infrastructure by 2030 ($250Bn in 2025). But DeepSeek’s success raises a hard truth: If AI efficiency keeps improving, most of that infrastructure may become obsolete before it’s even built.

Right now, data center expansions are based on 2023 assumptions about model scaling. But:

  • If training costs drop from $100M+ to single-digit millions, do we need such aggressive GPU cluster expansions?

  • If MoE and low-precision compute architectures become standard, does liquid cooling make sense?

DeepSeek may have just pulled the rug out from under the entire AI data center boom.

The AI Cost Structure Collapses

Right now, AI inference is expensive.

  • Every ChatGPT query consumes roughly 10x the energy of a Google search.

  • Every LLM inference request burns through GPU memory and compute power.
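Here is a quick sketch of what those two bullets imply at scale, using commonly cited (and debated) public estimates. None of these figures come from DeepSeek, and the query volume is hypothetical:

```python
# Illustrative only: rough public per-query energy estimates,
# scaled to a hypothetical fleet-wide workload.

google_search_wh = 0.3            # rough public estimate per Google search
chatgpt_query_wh = 3.0            # ~10x that, per the comparison above
queries_per_day = 1_000_000_000   # hypothetical volume

print(f"LLM/search energy ratio: {chatgpt_query_wh / google_search_wh:.0f}x")

daily_gwh = queries_per_day * chatgpt_query_wh / 1e9   # Wh -> GWh
print(f"1B LLM queries/day ≈ {daily_gwh:.1f} GWh/day")

# If inference becomes ~27x cheaper (the per-token claim circulating),
# the same workload shrinks accordingly:
print(f"at 27x efficiency  ≈ {daily_gwh / 27:.2f} GWh/day")
```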

But DeepSeek’s breakthroughs suggest that we could be approaching near-zero marginal costs for AI inference (upending the earlier assumption that margins would be squeezed by cost of sales).

  • Open-source models trained using DeepSeek’s methods will flood the market

  • Enterprises will run AI models locally instead of paying OpenAI and Microsoft for API access

  • Cloud-based AI services will be forced to cut prices — drastically

This means AI could soon be embedded in nearly every consumer device at near-zero cost, and that has huge implications for AI business models.

The Final Question: What If AI No Longer Needs Hyperscalers?

DeepSeek just rewrote the AI economic playbook. 

And that raises the biggest question of all:

What happens when AI no longer depends on Big Tech’s trillion-dollar infrastructure?

For years, we assumed that AI’s future belonged to hyperscalers — Microsoft, OpenAI, Google, and Nvidia — because only they could afford the astronomical compute costs.

But if DeepSeek’s innovations continue, AI could shift away from centralized cloud platforms entirely.

Imagine:

  • A future where every company runs its own AI locally — without needing OpenAI or AWS

  • A world where inference costs are close to zero, making AI-powered assistants, copilots, and tools essentially free

  • A collapse in demand for hyperscale data centers, because efficient AI doesn’t need them

As is often the case when such questions are asked, I am reminded of 2001. There were many, many losers to come out of the dotcom bubble. But the winners are the giants that walk among us today, and they are some of the most powerful economic entities the world has ever seen. I see no reason why the same would not happen here, as AI, just like the Internet, clearly has transformative potential.

In fact, it may be a net positive that companies are spending so much to build out AI infrastructure. The dotcom bubble was rough, sure, but the era’s over-build left a glut of fibre that took almost a decade to absorb, and that later companies could use cheaply. The corpses of failed companies are the backbone of the digital world we know today.

For the last five years, AI progress has been measured by how many GPUs you could stack together. DeepSeek just proved that a radically different approach is possible.

The real AI revolution won’t be about bigger data centers — it will be about smarter AI. The hyperscaler business model is at risk. If models become cheap and open-source, centralized AI platforms will struggle to justify their costs. Instead of pouring billions into liquid cooling, we should be redesigning AI models to use less power in the first place.

I’ve long said that data center CAPEX is unsustainable past 2030. Maybe that date was actually 2025 (more or less; let’s not get carried away).

The future of AI belongs to efficiency — not brute-force scale.

Good luck out there.