Editorial Summary: The Indian Express argues that schools and curricula alone cannot equip children for an AI-saturated world. The foundational habits of judgement, verification and ethical use must begin in family conversation, before they ever reach a classroom. Parents – not just policymakers – must model critical engagement with AI tools. The state’s responsibility under Article 21A and NEP 2020 is real and non-negotiable, but it is complementary to, not a substitute for, parental literacy.
The Problem the Curriculum Cannot Reach
Generative AI is the first technology in a century that children are likely to use more intensively at home than at school. Free or low-cost chatbots, image and video generators and AI-tutoring tools are already routine companions to homework, social media and entertainment. By the time a child encounters a structured “AI literacy” module in a Class IX textbook, she will have made hundreds of small choices about whether to trust, copy, verify or share AI output.
The early formation of these habits is happening at home. Three risks emerge:
- Trust without verification: children adopt AI answers as authoritative because adults around them do.
- Disclosure norms: when to acknowledge AI assistance, when to do work without it, what counts as honest authorship – these are negotiated at the dining table, not in a coding lab.
- Privacy and consent: children share personal information with chatbots in ways that have no analogue in pre-AI life; the default of caution must be modelled.
Indian Policy Foundations
Three policy strands are converging:
- NEP 2020 treats AI, computational thinking and digital literacy as cross-cutting competencies from the foundational stage, with curricular hooks at the secondary level.
- Digital Personal Data Protection Act, 2023 imposes consent requirements for processing of children’s data and creates obligations for data fiduciaries.
- IndiaAI Mission – with a Rs 10,372 crore outlay over five years approved in 2024 – includes a Safe and Trusted AI pillar, AIKosh datasets and a Future Skills programme that touches AI education.
The Information Technology Act, 2000 – particularly the safe harbour for intermediaries under Section 79 – is being re-examined for liability over AI-generated content; the Digital India Act, when notified, is expected to address this directly.
The Parental Literacy Gap
The structural problem is that the adults expected to mediate AI for children are themselves at varying levels of AI literacy. The editorial identifies four practical habits parents can model without technical expertise:
- Ask the source question: when a child uses an AI answer, ask where the AI got it. Demonstrate verification through a second source.
- Slow the impulse: pause before sharing AI-generated content; check for hallucinations and named-entity errors.
- Honest authorship: name the tool used in homework or school projects; treat disclosure as a value, not an embarrassment.
- Privacy minimalism: do not share names, addresses, schools, family details with chatbots; demonstrate this through behaviour, not lectures.
What the State Still Owes
The argument that AI literacy begins at home is not an argument for state withdrawal. Article 21A makes free and compulsory education a fundamental right for children aged 6 to 14; the state cannot outsource the curriculum to parents. The editorial proposes a complementary architecture:
- Teacher capacity: large-scale NCERT-led teacher training on AI literacy, with NEP-aligned modules.
- Curricular updates: foundational digital literacy in primary stages, AI-specific modules in middle and secondary, with hands-on practice.
- Public parental resources: short, multilingual, government-issued AI-parenting guides through Ministry of Education and IndiaAI Mission channels.
- Risk-tiered AI regulation: borrowing from the EU AI Act’s risk-tier framework (unacceptable, high-risk, limited-risk, minimal-risk), India’s forthcoming Digital India Act should classify AI applications affecting children at a higher protective tier.
The International Frame
The EU AI Act, the OECD AI Principles and UNESCO’s 2021 Recommendation on the Ethics of Artificial Intelligence are converging on a shared standard: AI affecting children should default to the higher protective tier. India’s AI policy has space to embed this principle without slowing innovation.
UPSC Mains Analysis
GS Paper 2 – Government policies, education / GS Paper 3 – Science and technology
Key arguments:
- Generative AI is the first major technology used more intensively by children at home than at school; literacy formation has shifted upstream of the classroom.
- NEP 2020, DPDP Act 2023 and IndiaAI Mission provide a policy spine; the missing layer is parental literacy.
- Article 21A binds the state to deliver AI literacy in schools; parental modelling is complementary, not substitutive.
- Risk-tiered regulation, drawing on the EU AI Act framework, can shield children without throttling innovation.
Counterarguments:
- Expecting parental literacy in a country with persistent digital divides risks reinforcing class-based AI literacy gaps.
- Schools and curricula are the more scalable instruments; under-investing in them while exhorting parents risks abdicating the constitutional duty under Article 21A.
- AI capabilities evolve faster than any curriculum or parental guidance can adapt, so literacy efforts risk perpetually lagging the technology.
Keywords: NEP 2020, Article 21A, IndiaAI Mission Rs 10,372 crore, DPDP Act 2023, Section 79 IT Act, Digital India Act, EU AI Act risk tiers, OECD AI Principles, UNESCO 2021 AI Ethics, AIKosh, generative AI hallucination, AI-content liability.
Editorial Insight
The Indian Express’s view is that the gap between AI capability and AI judgement will widen for the foreseeable future, and that gap is closed first in homes, not in classrooms. Parents do not need to be technologists to be the first teachers. Schools and the state cannot opt out – but if the foundational habits are not laid at home, no curriculum will catch up with them later.