The Indian AI infrastructure opportunity
Why every Indian AI startup we know is overpaying for compute, and what changes when domestic capacity comes online. Market analysis with real customer data.
The dollar problem
Every Indian AI founder I've met in the last two years has the same complaint: they pay USD-denominated cloud bills out of INR revenue. When the rupee weakens 4% against the dollar (as it has, on average, every year for the last decade), their compute cost rises 4% with no corresponding revenue lift. Every month they run on a hyperscaler, they carry an unhedged dollar liability against rupee revenue.
Aggregate this across the ecosystem: roughly $2.4B/year in compute spend by Indian AI companies, paid in dollars, draining FX reserves to fund a service that doesn't materially differ from what could be operated domestically.
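The compounding effect of that 4% annual drift is easy to underestimate. A minimal sketch, using illustrative numbers (the bill size and exchange rate below are assumptions, not figures from this analysis):

```python
def inr_cost(usd_bill: float, fx_rate: float, annual_dep: float, years: int) -> float:
    """INR cost of a fixed USD bill after `years` of steady INR depreciation."""
    return usd_bill * fx_rate * (1 + annual_dep) ** years

# Hypothetical inputs: a $10k/month bill, ₹83/USD today,
# and the 4% average annual depreciation cited above.
today = inr_cost(10_000, 83.0, 0.04, 0)
in_5y = inr_cost(10_000, 83.0, 0.04, 5)
print(f"month-1 bill: ₹{today:,.0f}")
print(f"same USD bill in 5 years: ₹{in_5y:,.0f}  (+{in_5y / today - 1:.0%})")
```

The same dollar invoice costs roughly a fifth more in rupees five years out, with zero change in usage.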
The talent problem (which is actually a hardware problem)
The "Indian AI talent shortage" narrative is incomplete. We have plenty of ML researchers — IITs, IIITs, and a healthy diaspora. What we don't have is enough people who've shipped GPU infrastructure at scale. That's because there hasn't been GPU infrastructure at scale to ship on. AWS in Mumbai is rented compute; you don't learn how a fabric works by paying someone else to operate it.
Building domestic compute is a flywheel: more capacity creates more jobs, those jobs train more engineers, those engineers build more capacity. We're at the start of this flywheel; it'll take five years to fully spin up.
What we see across 87 customers
Patterns from our customer base:
- The "AWS-first" startup. Series A, ₹50-200L/month cloud bill, 60% of which is GPU. Fundraising next round to pay for the next 18 months of compute.
- The "compliance-blocked" enterprise. A bank or hospital that wants AI but can't legally put data on a US-headquartered cloud. They've been stuck for 18 months waiting for a workable answer.
- The "scrappy researcher." A two-person team training a 7B model on free Colab credits, then on Kaggle, then on a hosted notebook service, never able to commit to a real production stack because it would cost more than they earn.
All three convert to domestic infrastructure within 90 days of finding out it exists. The first because of cost. The second because of compliance. The third because of price stability and predictability.
The market sizing
Conservative estimates we work with internally:
- Indian AI compute spend 2026: ~$2.4B, of which 80% is on hyperscalers.
- Realistic addressable market for domestic providers in 2026: ~$600M.
- By 2028, with current growth: $3-5B, with domestic share rising to 40%+.
That's a two-to-three-fold expansion of the domestic-addressable slice in two years, from ~$600M to $1.2-2B. Even modest market share at our pricing means a meaningful slice of a fast-growing pie.
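A back-of-envelope check of the sizing above; every input is one of the estimates already stated, nothing here is new data:

```python
# The article's own internal estimates.
addressable_2026 = 0.6e9              # domestic-addressable slice, 2026
spend_2028_low, spend_2028_high = 3e9, 5e9  # total market range, 2028
domestic_share_2028 = 0.40            # domestic share rising to 40%+

# Implied domestic slice in 2028 and its growth vs. 2026.
domestic_2028_low = spend_2028_low * domestic_share_2028
domestic_2028_high = spend_2028_high * domestic_share_2028
growth_low = domestic_2028_low / addressable_2026
growth_high = domestic_2028_high / addressable_2026

print(f"domestic slice 2028: ${domestic_2028_low / 1e9:.1f}B-${domestic_2028_high / 1e9:.1f}B")
print(f"expansion vs 2026 addressable: {growth_low:.1f}x-{growth_high:.1f}x")
```

That works out to a $1.2-2B domestic slice in 2028, i.e. a 2.0x-3.3x expansion of the addressable base.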
What needs to be true
For domestic infrastructure to win share, three things have to hold:
- Reliability has to match. Customers will happily take a 60% lower price, but not 60% lower uptime. Three nines minimum, four nines for enterprise. We hit 99.94% over the last 90 days, which is not yet good enough for the most demanding workloads.
- Hardware availability. A100s are still allocated globally on a friendship-and-favor basis. Indian providers need direct relationships with NVIDIA, OEMs, and increasingly with Chinese alternatives (sensitive politically; we won't go there for now).
- Domain expertise. Cloud isn't just compute — it's the operational know-how stacked on top. Indian providers have to invest heavily in DevOps, compliance, and customer success teams. Hiring sales people doesn't count.
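To make the reliability bar above concrete, here is what those availability figures translate to as downtime budgets over a 90-day window:

```python
def downtime_minutes(availability: float, days: int = 90) -> float:
    """Allowed downtime (minutes) for a given availability over `days` days."""
    return (1 - availability) * days * 24 * 60

# The SLO levels discussed above, over the same 90-day window.
for label, slo in [("three nines (99.9%)", 0.999),
                   ("99.94% (our last 90 days)", 0.9994),
                   ("four nines (99.99%)", 0.9999)]:
    print(f"{label}: {downtime_minutes(slo):.0f} min allowed per 90 days")
```

Three nines permits about 130 minutes of downtime per quarter, 99.94% about 78, and four nines only about 13: the gap between "good enough for startups" and "good enough for a bank" is an order of magnitude.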
The structural advantages
What domestic providers have that hyperscalers don't:
- Currency match. INR revenue, INR costs. No FX risk for our customers.
- Time zone match. When a Mumbai customer pages on-call at 3am, our team is awake. AWS support is in Manila on rotation; quality varies dramatically.
- Data residency by default. DPDP Act compliance is built-in for us. For AWS customers, it's a procurement project that takes 6 months.
- Lower cost base. Indian engineers cost less than American engineers. Our gross margins can be smaller and we still profit.
- Pricing flexibility. We can run experiments hyperscalers literally cannot — like unlimited egress, which would bankrupt them at their scale.
Why I think the next five years will be transformative
Three converging factors:
- Open-source models keep catching up. Llama 3.1 405B is competitive with GPT-4 on many tasks. The gap to closed models narrows quarterly. As that happens, customers shift from API-only consumption to self-hosted, and the importance of who-runs-the-GPUs grows.
- India's data localization push. RBI rules already require certain financial data to stay onshore. DPDP extends that pattern. Every year, more workloads become legally constrained to domestic infrastructure.
- Domestic capital availability. Indian VC funds raised $25B+ over the last three years. They want to fund Indian AI champions. Those champions want to scale on Indian infrastructure. The flywheel is starting.
Practical advice for founders right now
- If you're paying more than ₹3L/month on AWS GPU instances, get a quote from us (or any domestic alternative). Even if you don't switch, the negotiation leverage will save you 15-20% on your AWS bill.
- Architect with portability in mind. Use OpenAI-compatible APIs even if you're using OpenAI today. Use Kubernetes manifests, not AWS-specific managed services. The ability to migrate is itself a savings.
- Build relationships with the local infra ecosystem now. The pricing and capacity available to early customers two years from now will reflect early loyalty today.
- Treat compute strategy like FX hedging. Concentration risk in one provider, in one currency, is a board-level concern.
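On the portability point above: if your inference client speaks the OpenAI-compatible chat-completions wire format, switching providers reduces to changing one base URL. A minimal stdlib-only sketch; the URLs, model name, and key below are placeholders, not endorsements:

```python
import json
import urllib.request


def chat_request(base_url: str, api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a request against any OpenAI-compatible /chat/completions endpoint.

    Portability lives entirely in `base_url`: point it at one provider today
    and another tomorrow without touching the rest of the code.
    """
    payload = {"model": model,
               "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        url=f"{base_url.rstrip('/')}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )


# Swapping providers is a one-line config change, not a rewrite.
req = chat_request("https://api.openai.com/v1", "sk-placeholder", "gpt-4o", "hello")
print(req.full_url)  # https://api.openai.com/v1/chat/completions
```

The same principle applies one layer down: deploy with plain Kubernetes manifests rather than a hyperscaler's managed equivalents, and the migration cost stays bounded.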
The bigger thesis
India spent the last decade exporting talent to American cloud companies. The next decade will be about building the equivalent infrastructure here, profitably, at scale, in INR. Glixy is one company in that wave; there will be others. The faster the wave swells, the better the outcome for every Indian AI founder who would otherwise be paying tribute to Seattle.