AI is changing many facets of daily life: how people of all ages write, translate, study, teach, market, and do business. India had 1,002.85 million internet users by June 2025, against a population of about 1.45 billion in 2024 (World Bank Open Data). Because the country is so large and diverse, even small mistakes, flaws, or mischief can quickly escalate into serious harm with wide repercussions. At this scale, ethical questions are matters of basic protection. We are shaping the future right now, and millions of people are watching us do it.
Effective AI governance in India goes beyond faster services or smarter decisions. It requires that every AI tool operates with fairness, transparency, and accountability. As AI becomes integrated into education, healthcare, public services, and social media, ethical safeguards are essential. These measures help prevent bias, deepfake-related harm, and misuse before minor issues escalate into widespread problems.
Deepfakes that undermine public trust, together with biases and prejudices embedded in vernacular language models, are risks rising at an alarming rate. Waiting for a crisis before acting would only deepen the damage to public trust; these risks demand prompt, careful action now.
Vernacular Large Language Models: Bias Beyond Accuracy
In the early stages of developing or improving AI models for Indian languages, teams often judge performance by language proficiency alone. Ensuring fairness is a harder and more important job.
Researchers developed Indian-BhED, a benchmark that examines India-specific biases and preconceptions, including those based on caste, class, and religion. The findings are alarming: several widely used models showed a significant tilt toward stereotypical outputs for religion (69–72% of cases) and caste (63–79%). Assistants built on these models in Hindi, Tamil, Bengali, or Marathi could quickly amplify such prejudices unless they are tested and corrected properly.
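The rates above come down to one measurement: across paired completions, how often does a model prefer the stereotypical option? Below is a minimal sketch of that audit loop; the item format is invented and a toy scoring function stands in for a real model (Indian-BhED's actual data and scoring differ).

```python
def stereotype_rate(pairs, score):
    """Fraction of paired items where the model scores the stereotypical
    completion higher than the anti-stereotypical one."""
    if not pairs:
        raise ValueError("no audit items")
    hits = sum(1 for stereo, anti in pairs if score(stereo) > score(anti))
    return hits / len(pairs)

# Toy stand-in for a model's preference score: longer strings score higher.
# A real audit would compare the model's log-likelihoods for each completion.
toy_score = len

# Hypothetical (stereotype, anti-stereotype) completion pairs.
pairs = [
    ("stereotyped completion", "neutral one"),
    ("short", "a longer neutral completion"),
    ("biased text here", "ok"),
]
rate = stereotype_rate(pairs, toy_score)
print(f"stereotype rate: {rate:.0%}")  # 2 of 3 pairs -> 67%
```

A published audit would report this rate per axis (caste, religion, class) rather than as a single number, which is how the figures above are broken out.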
The 2025 ACL paper that introduced INDIC-BIAS examined biases and stereotypes across 85 identity groups in four areas: caste, religion, region, and tribe. Benchmarks like these set the bar a model should clear before it is deployed in India.
Bias in vernacular AI manifests as stereotypes, contempt, or false information in local languages that sound authoritative, leading people to believe them.
India’s current digital environment can amplify these effects. According to a 2024 study by the Internet and Mobile Association of India (IAMAI), India had 886 million internet users in 2024, about 20% of whom accessed the internet on shared devices. A single biased response seen on a shared device can spread harm far beyond one user. (IAMAI)
Deepfakes Show How Quickly Public Trust Can Erode
Bias is a slow-burning threat; deepfakes are an immediate and dangerous one. They violate privacy and erode the shared sense of reality people need to find common ground.
In January 2026, Reuters reported that Elon Musk’s xAI chatbot Grok had drawn regulatory action in several countries for generating sexualized images. India issued a notice demanding the content be taken down and requested a compliance report. (Source: Reuters)
Deepfakes are also fueling fraud. The Entrust Identity Fraud Report 2024 recorded a “staggering 3,000%” rise in deepfake attempts from 2022 to 2023, a 31-fold increase.
Removing harmful content often takes longer than intended, which widens the window of harm.
A 2024 WIRED investigation found that Google had received more than 13,000 DMCA complaints covering almost 30,000 URLs on deepfake porn sites; about 82% of the reported links were removed from search results. (WIRED)
Elections magnify these hazards, because a single viral video can change people’s minds in minutes. In late 2025, the Election Commission of India directed political parties and campaigners to clearly label any AI-generated or synthetic content they post on social media. (Indian Times)
What Ethical Rules Does India Need Right Now?
Strict restrictions and laws should not get in the way of new businesses and ideas. Instead, they should set safety baselines that everyone must follow, especially in high-risk sectors.
Mandate bias checks for high-impact applications. In industries such as education, hiring, lending, healthcare, and public services, use India-specific benchmarks (such as Indian-BhED and INDIC-BIAS) and publish a short safety note in plain English.
Require disclosure and provide an avenue for appeal. People should know when they are dealing with an AI system and be able to challenge its output when an AI tool harms them.
Social media needs strong protections. India’s draft rules from October 2025 propose that AI-generated content carry visible labels, covering at least 10% of an image or the first 10% of an audio clip. These safeguards should be paired with time-bound takedowns, rapid reporting channels, and penalties for repeat offenders. (Source: Reuters)
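The draft’s 10% thresholds reduce to simple sizing arithmetic a platform could enforce automatically. A minimal sketch, assuming the image requirement is met with a label band spanning 10% of the frame height (the draft’s exact geometry may differ):

```python
import math

def label_band_height(image_height_px, coverage=0.10):
    """Pixel height of a visible label band covering `coverage` of the
    frame height, rounded up so the label never falls short of the rule."""
    return math.ceil(image_height_px * coverage)

def audio_label_seconds(duration_s, fraction=0.10):
    """Length of the leading audio segment that must carry the disclosure."""
    return duration_s * fraction

print(label_band_height(1080))    # a 1080px-tall frame needs a 108px band
print(audio_label_seconds(90.0))  # a 90-second clip needs a 9.0s disclosure
```

Rounding up rather than down is a deliberate choice here: a 107px band on a 1080px frame would technically fall under the 10% floor.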
Preserving both innovation and public trust requires quick, proactive steps. Once trust is broken, far harsher rules become necessary to win people back. We should ensure that everyone who relies on these technologies has a basic level of security and protection.


