The Wrong Scale
At the Raisina Dialogue, Nandan Nilekani laid out a view of AI strategy for India that sounded practical on the surface: models are becoming commodities, there are plenty of choices, deployment matters more than model ownership, and sovereignty is not the real issue right now.
I understand why this sounds persuasive. Aadhaar and UPI changed how a billion people interact with the state. That track record earns respect.
But I think this is the wrong scale of thinking.
I am a product consultant. I use AI every day to prototype ideas, validate concepts with working systems, and move from discussion to demo quickly. I switch between Claude Code, Codex, Kimi Code and other tools depending on the job. For me, that flexibility is rational. If one model gets worse or too expensive, I can adapt.
But that logic works because I am one person.
A nation is not one person. A nation does not switch models the way a solo developer switches tools. A nation would be switching the intelligence layer underneath healthcare, agriculture, education, defense, welfare and governance systems that serve 1.4 billion people.
This is not just a bigger version of the same problem. It is a different kind of problem.
That is the core mistake in this way of thinking. It takes what is rational for an individual or even for a startup and stretches it to national strategy as if the consequences stay the same. They do not.
Models Are Not Commodities
The claim that AI models are becoming commodities sounds neat, but it collapses on contact with reality.
A commodity is interchangeable. It has many suppliers, low switching costs, transparent quality and limited strategic dependence. Frontier AI does not look like that. It has scale effects, compute concentration, data advantages, cloud lock-in, safety policy lock-in and ecosystem gravity. The most capable models do not simply compete on price. They shape standards, workflows and expectations.
Treating models as commodities is like saying fighter jets are commodities because multiple countries make them. It ignores the fact that the country that builds the F-35 dictates terms to everyone who flies it.
Intelligence is not a vendor utility. It is capital. And unlike previous economic shifts, this is not a layer we can casually outsource and expect to climb above later.
This is why this fourth inversion is final. When hands became obsolete, we pivoted to our minds. But when our minds are out-competed by a form of labor that does not need to eat, sleep, or live, there is nowhere left to pivot. We are not just facing a more efficient competitor; we are facing a different category of economic life.
-- Emad Mostaque, The Last Economy
That is why this matters. Models are not just software you rent. They are productive assets, closer to ports, payment rails, chip fabs, or factories. A country that does not control enough of this layer will be shaped by those who do.
Choice Is Not Sovereignty
A big part of the comfort here comes from the idea that there are enough models in the market to avoid dependency. Five from the US. Five from China. Open models. Commercial models. Plenty of choice.
But choice is not sovereignty.
The American models are commercial. You pay, you use, you accept their terms, their safety policies, their rate limits, and their governance. If relations shift, access can be restricted.
The Chinese models are presented as the fallback, but that confidence is misplaced. India has already shown deep institutional distrust of Chinese technology. TikTok was banned. Chinese devices and chips are treated with caution. Border tensions persist. Yet suddenly Chinese AI is supposed to be a safe comfort because some of it is open-weight.
That is false comfort.
Open today does not mean open tomorrow. Cheap today does not mean dependable tomorrow. And even before you get to geopolitics, a frontier model from Anthropic and an open-weight model from a Chinese lab are not interchangeable. Different training data. Different alignment. Different safety tuning. Different refusal behavior. Different performance in Indian languages and workflows.
Switching between them is not like changing browsers. It is not even like changing cloud vendors. It means rebuilding trust, revalidating behavior, retraining institutional workflows and absorbing the risk of failure across systems that people depend on.
A nation cannot treat that as optional complexity. It is part of the cost of dependence.
Useful Is Not the Same as Adequate
This was one of the most frustrating parts of the talk: the suggestion that even if model progress stopped today, current models could create value for ten years.
That may be technically true in a narrow sense. Old software can still be useful. Old databases still store data. Old OCR systems still read printed text.
But that is not the right bar.
There is a difference between saying a model is useful today and saying it is good enough to sit under national infrastructure for the next decade. The first statement is reasonable. The second is reckless.
A ten-year-old AI model serving medical triage, agricultural risk advisory, legal assistance, multilingual citizen services, or defense logistics is not a stable national asset. It is an aging liability.
The world does not stand still. Laws change. Schemes change. Diseases change. Crops change. Threat models change. Languages evolve. A model is a compressed representation of the world up to a point in time. Retrieval can help, but retrieval does not replace architecture, alignment, priors, safety behavior or reasoning quality.
A stale model is also a security problem. Attackers learn its weaknesses. Prompt injection patterns become known. Failure modes become predictable. The system may continue to function while quietly becoming less safe, less competitive and less efficient.
And the market itself contradicts the comfort. Frontier labs are racing to release better reasoning systems, better coding models, better multimodal systems, lower inference costs and stronger agents. They are not behaving as if current capability is sufficient for ten years. Countries should not either.
The danger of this argument is not just technical. It is political. It gives institutions permission to delay the hard work of capability-building. Why invest in domestic models? Why build evaluation capacity? Why build Indian-language datasets? Why push on compute? Why develop sovereign inference? That claim becomes a permission slip for strategic laziness.
The real question is not whether today's models can still produce value in ten years. The real question is whether India can afford to let its intelligence layer age while other powers keep upgrading theirs.
Deployment Is Hard. That Is Exactly Why Ownership Matters.
The other strong argument is that technology is only thirty percent of the problem and seventy percent is organizational. Integrating fragmented data, building trust architectures, coordinating institutions and creating reliable public systems is the real work.
That part is true.
In fact, it is more than true. It is brutally true.
Getting Indian institutions to move together is not a one-quarter problem. It is not even a one-year problem. It is years of coordination, workflow redesign, political alignment, procurement change, operational trust-building, and public adaptation.
But that is exactly why the thirty-seventy framing becomes dangerous when the model is rented.
If India is building on top of its own sovereign model capability, then yes, most of the effort will be organizational. The model is the foundation and the hard work is everything built on top of it.
But if India is building on top of someone else's model, the equation changes. All that institutional work, all that coordination, all that trust-building, all that political effort becomes exposed to the moment someone else changes terms, changes access, changes pricing or changes behavior.
That is why the model layer is not a utility. It is not just plumbing. It is the foundation. You do not build a house on someone else's land and call it yours.
The Sequencing Problem
The deploy-now-build-later argument sounds pragmatic. Use what exists today. Create value now. Build deeper capability later when the economics are better.
The problem is that infrastructure is path-dependent.
The first generation of systems shapes procurement, standards, evaluation, workflows, talent allocation and institutional habit. If those systems are built on foreign models, later sovereign alternatives do not compete on a clean slate. They compete against installed dependencies.
That is why sequencing is strategy.
If the national message is that models are commodities and ownership can wait, talent flows accordingly. Strong engineers go into integration, wrappers, orchestration and applied services. Fewer people go into model-building, evaluation, systems research, training infrastructure and long-horizon capability. By the time the country decides ownership matters, the bench is weaker and the gap is larger.
This is not abstract. Mostaque's framing of fast and slow systems is useful here. Visible deployment wins come from the fast layer. But research capacity, institutions, talent pipelines and technical sovereignty are slow-layer assets. If you optimize only for immediate diffusion, you starve the layer that determines whether the country can adapt later.
That is how short-term pragmatism becomes long-term dependence.
The Aadhaar Contradiction
This is the hardest part to ignore.
Nandan Nilekani built Aadhaar because identity infrastructure was too foundational to outsource. He understood that a nation cannot rent the systems that verify who its citizens are. He fought to build sovereign rails because the underlying layer mattered.
Now the position being advanced is that intelligence, which is becoming just as foundational, can be rented.
That contradiction is not minor. It is the whole argument.
Aadhaar exists because someone refused to say, "identity is too hard, let us just use what is available and build our own capacity later." The logic then was sovereignty at the foundational layer. The logic now seems to be convenience at the foundational layer.
That is a downgrade in strategic ambition.
There is also a second contradiction. The claim that political support follows visible results does not match the Aadhaar and UPI story. Those systems did not emerge because politicians casually observed local wins and then got excited. They happened because there was top-down political will behind them. The doctrine came before the diffusion.
That matters for AI too. If the state treats sovereign capability as something that can wait until applications prove themselves, it is already too late. Foundational infrastructure requires doctrine early, not enthusiasm later.
India Is Not Just Using AI. India Is Training It.
India is now one of the biggest AI markets in the world. That is usually framed as good news. In one sense, it is. It means Indian developers, students, institutions and businesses are learning fast.
But there is another side to it.
Every Indian interaction with these systems becomes training fuel. Code. Language patterns. Regional edge cases. Legal phrases. Medical phrasing. Agricultural queries. Multilingual usage. All of it helps make the dominant models better.
So India is not just consuming AI. India is helping train the models it does not control.
That creates a double extraction. Data goes out. Better models come back for rent. India pays twice: once with its data and once with its money.
This is not a neutral market dynamic. It is a self-reinforcing flywheel.
The bigger the network, the more data. The more data, the better the AI. The better the AI, the more users it attracts. It is not just a virtuous cycle. It is an accelerating spiral that approaches singularity.
-- Emad Mostaque, The Last Economy
That is why scale alone is not victory. Being the second-largest market is not automatically power. If the intelligence layer stays foreign, market size can simply mean India becomes the world's largest downstream user base and one of its largest unpaid training grounds.
What the Talk Gets Right
This is not a case for isolation.
It is correct that deployment is hard. It is correct that India has real capital and compute constraints. It is correct that institutions matter. It is correct that India should use every strong model available right now.
That is not the disagreement.
The disagreement is with the leap from those truths to the conclusion that ownership can wait.
India should absolutely deploy on top of the best systems available. But deployment without capability-building is not strategy. It is usage.
It is also correct to care about jobs, inclusion and real-world value. But if the model layer is foreign, the country is directing applications, not technology. It can optimize downstream workflows while remaining exposed at the upstream layer that matters most.
The Real Question
The debate is often framed as if the choice is between deployment and sovereignty. That is the wrong framing.
The real choice is whether India wants to build its AI future on rented foundations.
Sovereign does not mean isolated. It means capable.
It means India should use every strong foreign model it can access while also building enough domestic capability to avoid strategic blindness if access changes. Enough compute. Enough evaluation capacity. Enough Indian-language infrastructure. Enough model-building talent. Enough inference under Indian jurisdiction. Enough fallback capacity in critical sectors.
This is not about autarky. It is about insurance.
You do not buy fire insurance because you expect a fire. You buy it because you cannot tolerate the consequences of being wrong.
Final Thought
The push for deployment is necessary. But it is not sufficient.
A nation cannot build long-term systems on layers it does not control.
Use what exists. Move fast. Build real things.
But do it on a path that leads to ownership, not dependency.
