STRATEGY

The Indian AI infrastructure opportunity

Why every Indian AI startup we know is overpaying for compute, and what changes when domestic capacity comes online. Market analysis with real customer data.

Vikram Krishnan · CEO · 14 min read · 26 Feb 2026

The dollar problem

Every Indian AI founder I've met in the last two years has the same complaint: they pay USD-denominated cloud bills out of INR revenue. When the rupee weakens 4% against the dollar — as it has, on average, every year for the last decade — their compute cost rises 4% with no corresponding revenue lift. They're shorting their own currency every month they run on a hyperscaler.
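The compounding effect is easy to underestimate. A quick sketch, using an illustrative $50k/month bill (the dollar amount and exchange rate are assumptions for the example, not figures from our data) and the ~4% average annual depreciation cited above:

```python
# Sketch: effect of steady INR depreciation on a fixed USD cloud bill.
# The $50k/month bill and the 84 INR/USD starting rate are illustrative
# assumptions; the 4% annual depreciation is the average cited above.

def inr_bill(usd_bill, fx_rate, annual_depreciation, years):
    """INR cost of the same USD bill after `years` of depreciation."""
    return usd_bill * fx_rate * (1 + annual_depreciation) ** years

start = inr_bill(50_000, 84.0, 0.04, 0)   # today
later = inr_bill(50_000, 84.0, 0.04, 5)   # five years out
print(f"Year 0: ₹{start:,.0f}/month")
print(f"Year 5: ₹{later:,.0f}/month (+{later / start - 1:.0%})")
```

Same workload, same vendor, same dollar price: the INR bill is up over a fifth in five years before a single extra GPU-hour is consumed.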

Aggregate this across the ecosystem: roughly $2.4B/year in compute spend by Indian AI companies, paid in dollars, draining FX reserves to fund a service that doesn't materially differ from what could be operated domestically.

The talent problem (which is actually a hardware problem)

The "Indian AI talent shortage" narrative is incomplete. We have plenty of ML researchers — IITs, IIITs, and a healthy diaspora. What we don't have is enough people who've shipped GPU infrastructure at scale. That's because there hasn't been GPU infrastructure at scale to ship on. AWS in Mumbai is rented compute; you don't learn how a fabric works by paying someone else to operate it.

Building domestic compute is a flywheel: more capacity creates more jobs, those jobs train more engineers, those engineers build more capacity. We're at the start of this flywheel; it'll take five years to fully spin up.

What we see across 87 customers

Patterns from our customer base:

All three convert to domestic infrastructure within 90 days of finding out it exists: the first because of cost, the second because of compliance, the third because of price stability and predictability.

The market sizing

Conservative estimates we work with internally:

That's roughly a 6× expansion over two years. Even modest market share at our pricing means a meaningful slice of a fast-growing pie.
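For intuition on what a 6× expansion over two years implies as an annual rate (pure arithmetic on the figure above, not an additional forecast):

```python
# Sketch: annual growth rate implied by a 6x expansion over two years.
# This is just the compound-growth identity applied to the figure cited
# in the post, not a separate estimate.

expansion, years = 6.0, 2
cagr = expansion ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.0%}")
```

In other words, the market would need to roughly 2.4–2.5× every year for the two-year figure to hold.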

What needs to be true

For domestic infrastructure to win share, three things have to hold:

  1. Reliability has to match. Customers will tolerate 60% lower price, but not 60% lower uptime. Three-nines minimum, four-nines for enterprise. We hit 99.94% over the last 90 days; not yet good enough for the most demanding workloads.
  2. Hardware availability. A100s are still allocated globally on a friendship-and-favor basis. Indian providers need direct relationships with NVIDIA, OEMs, and increasingly with Chinese alternatives (sensitive politically; we won't go there for now).
  3. Domain expertise. Cloud isn't just compute — it's the operational know-how stacked on top. Indian providers have to invest heavily in DevOps, compliance, and customer success teams. Hiring sales people doesn't count.
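The uptime targets in point 1 translate into concrete downtime budgets. A quick comparison over the same 90-day window as our reported 99.94%:

```python
# Sketch: downtime allowed by each uptime target over a 90-day window,
# alongside the 99.94% figure reported above.

def downtime_minutes(uptime_pct, days=90):
    """Minutes of allowed downtime over `days` at the given uptime %."""
    return days * 24 * 60 * (1 - uptime_pct / 100)

for target in (99.9, 99.94, 99.99):
    print(f"{target}% -> {downtime_minutes(target):.0f} min per 90 days")
```

Three nines permits about two hours of downtime per quarter; four nines permits about thirteen minutes. The gap between 99.94% and four nines is the gap we still have to close for the most demanding workloads.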

The structural advantages

What domestic providers have that hyperscalers don't:

Why I think the next five years will be transformative

Three converging factors:

  1. Open-source models keep catching up. Llama 3.1 405B is competitive with GPT-4 on many tasks, and the gap to closed models narrows quarterly. As that happens, customers shift from API-only consumption to self-hosting, and who runs the GPUs matters more.
  2. India's data localization push. RBI rules already require certain financial data to stay onshore. DPDP extends that pattern. Every year, more workloads become legally constrained to domestic infrastructure.
  3. Domestic capital availability. Indian VC funds raised $25B+ over the last three years. They want to fund Indian AI champions. Those champions want to scale on Indian infrastructure. The flywheel is starting.

Practical advice for founders right now

The bigger thesis

India spent the last decade exporting talent to American cloud companies. The next decade will be about building the equivalent infrastructure here, profitably, at scale, in INR. Glixy is one company in that wave; there will be others. The faster the wave swells, the better the outcome for every Indian AI founder who would otherwise be paying tribute to Seattle.


Related: Why GPU clusters in India cost 60% less