The Editorial Argument
The AI revolution is not unfolding as the distributed democratisation of intelligence its early advocates promised. It is consolidating into one of history’s most concentrated exercises of economic and informational power. A handful of corporations now control the foundational infrastructure of the AI economy: a few large language model developers, three cloud infrastructure providers, and two dominant chip manufacturers. The question the editorial poses is not whether AI is transformative, but who controls the transformation and who is accountable when it goes wrong.
The Concentration Problem
Model Layer
- Three companies (OpenAI/Microsoft, Google DeepMind, Anthropic) account for the overwhelming majority of frontier AI model development
- The computational and data requirements to train frontier models have made entry prohibitively expensive for all but the largest entities
- Open-source models (Meta’s LLaMA, Mistral) offer a partial counter to concentration, but fine-tuning and deploying them at scale still require significant infrastructure
Infrastructure Layer
- Three cloud providers (AWS, Microsoft Azure, Google Cloud) provide the compute infrastructure for virtually all large-scale AI training and deployment
- NVIDIA controls ~80% of the AI chip market — its H100/H200 GPU dominance creates a supply chain chokepoint that even governments cannot easily bypass
- India’s domestic AI infrastructure (through the IndiaAI Mission and the National AI Computing Infrastructure) is a meaningful but partial counter
Governance Layer
- The EU AI Act (2024) is the most comprehensive existing regulatory framework; India has no equivalent legislation
- The US has relied on voluntary commitments and executive orders — easily reversed
- China has its own AI regulatory framework, but it does not incorporate democratic accountability principles
- The UN AI Advisory Body’s recommendations (2024) remain non-binding
India’s AI Governance Gap
India has made significant investments in AI development:
- IndiaAI Mission (2024): Rs 10,372 crore outlay; compute infrastructure, datasets, application development, skilling
- Bhashini platform: Multilingual AI for government services and translation
- AI for Agriculture, Health, Education applications under Digital India
- AIRAWAT: India’s AI computing cluster (launched 2023, expanding)
But governance is lagging behind investment: there is still no regulatory framework that fixes accountability when AI systems cause harm, discriminate, or concentrate power. The Ministry of Electronics and Information Technology (MeitY) has issued AI advisories, not legislation. India’s data protection framework, the Digital Personal Data Protection Act, 2023, offers partial protection for personal data used in AI training, but does not govern decisions made by deployed AI systems.
Three Risk Categories
1. Labour Market Disruption
The ILO and the McKinsey Global Institute estimate that AI could displace 300–800 million jobs globally by 2035. India, with its large services sector and significant white-collar employment in IT and ITES, faces particular risk in:
- IT services and BPO (coding, data entry, customer support — all automatable)
- Legal services (contract review, document analysis)
- Financial services (credit analysis, fraud detection — partial automation)
The editorial argues that disruption at this scale requires active policy: reskilling programmes, social protection reform, and governance of AI deployment in employment-related decisions (hiring, performance evaluation, termination).
2. Algorithmic Decision-Making Without Accountability
AI systems are increasingly used in consequential decisions: bail and sentencing (COMPAS-type systems in the US), loan approvals, welfare benefit eligibility, facial recognition in public spaces. These systems:
- Are often opaque (no explainability)
- May encode historical biases (training data reflects past discrimination; see the sketch after this list)
- Have no redress mechanism when decisions are wrong
- Are treated as objective when they are not
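To make the bias point concrete, here is a minimal, self-contained Python sketch. All data, group names, and thresholds are synthetic and invented for this illustration (they are not from the editorial). It shows how a model fitted to historically skewed approval decisions reproduces that skew on new applicants, and how a simple approval-rate audit can surface a disparity that the raw model output would present as "objective".

```python
import random

random.seed(0)

# Synthetic "historical" loan decisions: both groups have the same creditworthiness
# distribution, but past decisions used a stricter cutoff for group "B".
def historical_record(group):
    score = random.gauss(650, 50)                       # same distribution for A and B
    approved = score > (620 if group == "A" else 670)   # biased historical cutoff
    return {"group": group, "score": score, "approved": approved}

history = [historical_record(g) for g in ("A", "B") for _ in range(5000)]

# A naive "model" that learns, per group, the lowest score ever approved in the history.
# Any learner fitted to these labels absorbs the same skew; in real systems the group
# is usually leaked indirectly through proxy features such as location.
def learned_threshold(group):
    approved_scores = [r["score"] for r in history
                       if r["group"] == group and r["approved"]]
    return min(approved_scores)

model = {g: learned_threshold(g) for g in ("A", "B")}

# Deploy on new applicants drawn from the *same* distribution for both groups.
applicants = [{"group": g, "score": random.gauss(650, 50)}
              for g in ("A", "B") for _ in range(5000)]
decisions = [(a["group"], a["score"] >= model[a["group"]]) for a in applicants]

# Disparity audit: approval-rate gap between groups (demographic parity difference).
def approval_rate(group):
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

print(f"Approval rate A: {approval_rate('A'):.2f}")
print(f"Approval rate B: {approval_rate('B'):.2f}")
print(f"Gap (demographic parity difference): {approval_rate('A') - approval_rate('B'):.2f}")
```

The learner is never instructed to discriminate; the bias arrives entirely through the historical labels it is fitted to, which is why audits of outcomes, and a redress mechanism for affected individuals, matter in addition to reviewing the code.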
India has rolled out facial recognition at airports and railway stations and in police surveillance, without an adequate legal framework for challenging false identifications or discriminatory profiling.
3. Misinformation and Democratic Integrity
Generative AI enables the production of photorealistic synthetic media (deepfakes), personalised misinformation at scale, and AI-generated political messaging. The 2026 election cycles in multiple democracies are among the first to unfold in a high-capability generative AI environment. India’s IT Amendment Rules, 2023 require platforms to label AI-generated content, but enforcement remains weak.
What an AI Governance Framework Requires
The editorial proposes five principles:
- Transparency: AI systems used in public services or consequential private decisions must be explainable to affected individuals
- Accountability: A designated body (proposed: AI Regulatory Authority of India) must have powers to investigate, fine, and mandate rectification of harmful AI systems
- Non-discrimination: AI systems must not perpetuate discriminatory outcomes — with sector-specific standards for credit, employment, and law enforcement
- Human oversight: High-risk decisions (criminal justice, welfare eligibility, healthcare diagnosis) must keep a human decision-maker in the loop (see the routing sketch after this list)
- Interoperability with global standards: India’s framework should align with — and influence — international norms through G20, GPAI (Global Partnership on AI), and bilateral agreements
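As a purely illustrative sketch of how the human-oversight and transparency principles could be operationalised in software: the high-risk categories below are taken from the bullet above, but the `Decision` type, field names, and `route` function are assumptions made for this example, not part of any proposed framework.

```python
from dataclasses import dataclass

# Sectors the editorial lists as high-risk; a real framework would define these in law.
HIGH_RISK_DOMAINS = {"criminal_justice", "welfare_eligibility", "healthcare_diagnosis"}

@dataclass
class Decision:
    domain: str        # which sector the decision belongs to
    model_output: str  # the AI system's recommendation
    explanation: str   # rationale shown to the affected individual (transparency)

def route(decision: Decision) -> str:
    """Decide how an AI recommendation is handled under a risk-based framework."""
    if decision.domain in HIGH_RISK_DOMAINS:
        # High-risk: the model output is advisory only; a human makes the final call,
        # and the explanation travels with the case file for reviewer and citizen.
        return (f"HUMAN REVIEW required -> {decision.model_output} "
                f"(reason given: {decision.explanation})")
    # Lower-risk: automated action is permitted, but the explanation is still logged
    # so the affected person has a basis for redress.
    return f"AUTOMATED -> {decision.model_output} (reason given: {decision.explanation})"

print(route(Decision("welfare_eligibility", "deny benefit", "declared income above threshold")))
print(route(Decision("content_moderation", "flag post", "matched known spam pattern")))
```

The point is not the code itself but the design choice it encodes: in high-risk domains the system’s output is an input to a human decision, never the decision itself.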
India’s Opportunity
India’s position is unusual: it is large enough to be a regulatory market-shaper (as the EU has been with the GDPR and the AI Act), technologically capable enough to participate in frontier development (through the IndiaAI Mission and the IIT ecosystem), and linguistically diverse enough to demand AI systems that work for Hindi, Tamil, Bengali, and dozens of other languages and cultural contexts, not just models trained on English-language data.
This is an opportunity for India to set global AI governance standards for the Global South — if it acts now, before the technology is locked into patterns of deployment that become difficult to reverse.
UPSC Relevance
| Paper | Angle |
|---|---|
| GS3 — Science & Technology | AI development, AI governance, LLMs, generative AI |
| GS3 — Economy | AI and labour, IT sector disruption, gig economy |
| GS2 — Governance | Digital India, IndiaAI Mission, AI regulation, data protection |
| GS4 — Ethics | AI ethics, algorithmic bias, human dignity, accountability |
Mains Keywords: IndiaAI Mission, AI governance, EU AI Act, GPAI (Global Partnership on AI), AIRAWAT, Bhashini, Digital Personal Data Protection Act 2023, algorithmic accountability, deepfakes, AI and labour displacement, facial recognition regulation, MeitY, explainability, AI concentration of power
Prelims Facts Corner
| Item | Fact |
|---|---|
| IndiaAI Mission | 2024; Rs 10,372 crore; compute + datasets + skilling |
| AIRAWAT | India’s AI computing cluster; launched 2023 |
| Bhashini | MeitY platform for multilingual AI; Indian languages NLP |
| EU AI Act | 2024; most comprehensive AI regulation globally; risk-based framework |
| GPAI | Global Partnership on AI; multilateral AI governance grouping; India is a founding member |
| DPDP Act 2023 | Digital Personal Data Protection Act — partial coverage for AI data use |
| IT Amendment Rules 2023 | Require platforms to label AI-generated content |
| ILO/McKinsey estimates | AI could displace 300–800 million jobs globally by 2035 |
| NVIDIA market share | ~80% of AI chip (GPU) market |