India's Practical Bet on Sovereign AI

The India AI Impact Summit 2026 in New Delhi made one thing clear: India is not trying to win the AI race the way the US or China are.

Instead of chasing the largest models or spending the most capital, the country is focusing on infrastructure and practical deployment. The goal is simple: make AI usable across a country of 1.4 billion people, not just inside research labs but across real systems.

Compute as Shared Infrastructure

One of the largest announcements was the ₹10,372 crore IndiaAI Mission.

The plan treats compute as shared national infrastructure. Instead of leaving GPU access to a small group of well-funded companies, India is expanding a national compute pool that startups and researchers can access. The system currently operates with roughly 38,000 GPUs, with another 20,000 being added.

Access is also being subsidized. High-end GPUs are available for about ₹65 per hour, dramatically lowering the barrier to experimentation for local startups and research teams. This changes who gets to build.
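To see what that subsidy means in practice, here is a back-of-the-envelope comparison. The ₹65 per GPU-hour figure is from the announcement above; the market rate and the experiment size are hypothetical assumptions for illustration.

```python
# Rough cost comparison: subsidized vs. typical market GPU-hour rates.
# SUBSIDIZED_RATE comes from the article; MARKET_RATE is an assumed
# ballpark cloud price, not a quoted figure.

SUBSIDIZED_RATE = 65    # ₹ per GPU-hour (IndiaAI, per the article)
MARKET_RATE = 300       # ₹ per GPU-hour (assumption for comparison)

def experiment_cost(gpu_hours: int, rate: float) -> float:
    """Total cost of an experiment given GPU-hours consumed and hourly rate."""
    return gpu_hours * rate

# A hypothetical mid-sized experiment: 8 GPUs running for 500 hours.
hours = 8 * 500

print(f"subsidized: ₹{experiment_cost(hours, SUBSIDIZED_RATE):,.0f}")
print(f"market:     ₹{experiment_cost(hours, MARKET_RATE):,.0f}")
```

At these assumed numbers the same experiment costs roughly a fifth as much on the national pool, which is the kind of difference that changes who can afford to iterate.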

A Different Strategy

The United States and China are competing largely on scale. They are investing in more chips, more capital, and bigger models.

India appears to be taking a different route. The emphasis is efficiency and localization. That means models designed for Indian languages, Indian infrastructure, and Indian use cases rather than global general-purpose dominance.

Tokenization illustrates the difference. Some Indian startups have developed tokenizers that use 1.4 to 2.1 tokens per word for Indian languages, while many global models require 4 to 8 tokens per word. At national scale, that efficiency difference directly affects cost.
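A quick sketch shows how tokenizer efficiency compounds at scale. The tokens-per-word ranges are the ones cited above; the daily word volume and per-token price are hypothetical assumptions chosen only to make the arithmetic concrete.

```python
# Back-of-the-envelope monthly token cost for two tokenizer efficiencies.
# Tokens-per-word figures mirror the article's ranges; WORDS_PER_DAY and
# PRICE_PER_1K are illustrative assumptions, not real pricing.

def monthly_token_cost(words_per_day: int,
                       tokens_per_word: float,
                       price_per_1k_tokens: float) -> float:
    """Estimated 30-day spend for a service at a given tokenizer efficiency."""
    tokens_per_day = words_per_day * tokens_per_word
    return tokens_per_day * 30 * price_per_1k_tokens / 1000

WORDS_PER_DAY = 10_000_000   # assumed volume for a population-scale service
PRICE_PER_1K = 0.002         # assumed $ per 1,000 tokens

efficient = monthly_token_cost(WORDS_PER_DAY, 1.8, PRICE_PER_1K)  # ~1.4-2.1 range
generic = monthly_token_cost(WORDS_PER_DAY, 6.0, PRICE_PER_1K)    # ~4-8 range

print(f"efficient tokenizer: ${efficient:,.0f}/month")
print(f"generic tokenizer:   ${generic:,.0f}/month")
print(f"cost ratio: {generic / efficient:.1f}x")
```

Even with these made-up volumes, a tokenizer that needs three times fewer tokens per word translates directly into roughly a third of the inference bill.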

India is also leveraging its Digital Public Infrastructure, including systems like Aadhaar. These platforms already operate at population scale, giving new AI services an existing distribution layer. That makes experimentation easier and deployment faster.

Local Models Are Emerging

Several Indian companies presented specialized models designed for specific domains or language environments. Some of the results are promising.

  • Fractal's Vaidya 2.0, a healthcare reasoning model, scored 50.1 on OpenAI HealthBench.
  • Sarvam AI's Vision model achieved 84.3 percent accuracy on complex Indian script OCR.
  • BharatGen's Param2 model uses a Mixture-of-Experts architecture, activating only 2.4 billion parameters per inference despite having 17 billion total.
  • Gnani.ai demonstrated a voice-to-voice conversational model capable of handling overlapping speech while preserving tone.

But it is important to keep these benchmarks in perspective. Benchmarks like HealthBench scores or OCR percentages are mostly vanity metrics for end users. People do not care about parameter counts or evaluation datasets. They care about speed and getting the right answer quickly. Even the comparisons themselves can be misleading. Saying a specialized Indian medical model outperformed GPT-5 on a medical benchmark is not really an apples-to-apples comparison. It is comparing a specialist to a generalist.

What matters is something else entirely. For product builders, these specialized systems open the door to routing tasks to smaller, highly efficient models rather than sending every query to an expensive general-purpose cloud model.

Instead of paying the massive cost of a generalized API for every request, products can route specific tasks to lower-cost engines designed for that task.

The benchmark score matters less than the architecture it enables.
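The routing pattern described above can be sketched in a few lines. The model names, task types, and cost figures here are all hypothetical placeholders; the point is only the shape of the architecture: cheap specialists first, an expensive generalist as fallback.

```python
# Minimal sketch of cost-aware task routing. All model names and
# cost_per_call values are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_call: float  # relative cost units, assumed

# Registry of cheap specialized models, keyed by task type.
SPECIALISTS = {
    "medical_qa": Model("local-medical-model", cost_per_call=0.2),
    "indic_ocr": Model("local-ocr-model", cost_per_call=0.1),
}

# Expensive general-purpose fallback for everything else.
GENERALIST = Model("general-cloud-api", cost_per_call=5.0)

def route(task_type: str) -> Model:
    """Send recognized task types to a specialist; otherwise fall back."""
    return SPECIALISTS.get(task_type, GENERALIST)

print(route("indic_ocr").name)        # specialist handles the known task
print(route("open_ended_chat").name)  # generalist catches the rest
```

In a real system the routing decision would come from a classifier rather than an exact key lookup, but the economics are the same: every request that a specialist can handle is an order of magnitude cheaper than sending it to the generalist.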

The Real Constraint: Infrastructure

AI systems may be digital, but the infrastructure behind them is not.

Data centers require huge amounts of electricity and water. As AI adoption grows, both resources will come under pressure, especially in cities that already face shortages.

The real limits of AI growth may not be models. They may be infrastructure.

There is also a workforce transition underway. India's IT services industry will have to move toward AI-assisted development and agent-based engineering. The traditional outsourcing model is already changing. That transition will require retraining at scale.

What This Signals

India is not trying to outspend the United States or China.

It is trying to optimize for its own conditions. These include population scale, language diversity, and cost sensitivity. That approach favors efficiency, localization, and shared infrastructure.

If the strategy works, India may demonstrate a different way to build national AI capability. The focus would be less on model size and more on practical systems.

But the outcome will depend on something far less glamorous than model benchmarks.

Power. Water. And the infrastructure required to support both.