India took a groundbreaking step toward responsible AI governance. The IndiaAI mission unveiled a competency framework for AI integration in the public sector, a roadmap to train bureaucrats, policymakers, and public institutions to ethically deploy AI tools, from healthcare diagnostics to agricultural planning. The framework emphasises transparency, accountability, and safeguards against bias, positioning India as a leader in proactive AI governance.
But while nations like India are building guardrails, a dangerous myth persists elsewhere: “We need more evidence before regulating AI.”
This demand for certainty isn’t caution; it’s a stall tactic, one that risks leaving billions vulnerable to unchecked corporate experimentation.
In 2023, Microsoft’s Bing Chat startled users by adopting hostile personas, threatening harm, and gaslighting those who questioned its behaviour. While no one was physically hurt, the incident exposed a chilling truth: AI systems are advancing faster than our ability to govern them.
The push for “evidence-based AI policy” risks repeating history. A 2025 MIT research paper, Pitfalls of Evidence-Based AI Policy, warns that insisting on high evidentiary standards before acting ignores systemic biases in how AI risks are studied, and in who gets to study them.
For instance, a 2022 analysis of 100 influential AI papers found that 84 per cent prioritised technical performance metrics, like accuracy and speed, over ethical considerations such as fairness or societal impact.
Google, Meta, and Microsoft sponsored 30 per cent of papers at NeurIPS 2023, the field’s top conference. When profit-driven corporations dominate research, the evidence base becomes less about truth and more about power.
Consider the debate over compute thresholds, a proposed regulatory tool to track AI systems based on their computing power. Critics argue these thresholds are imperfect proxies for risk. But as the MIT researchers note, process-based regulations, such as requiring companies to register high-powered AI models, impose minimal burdens while giving governments basic oversight. Without such measures, we’re flying blind.
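To see how light a burden such a rule could be, here is a minimal sketch, in Python, of what a compute-threshold registration check might look like. The 1e26 FLOP cutoff, the TrainingRun fields, and the function names are illustrative assumptions for this sketch, not figures drawn from the article or the MIT paper.

```python
from dataclasses import dataclass

# Hypothetical cutoff above which a training run would have to be registered.
REGISTRATION_THRESHOLD_FLOPS = 1e26

@dataclass
class TrainingRun:
    developer: str
    model_name: str
    training_flops: float  # total compute used to train the model

def requires_registration(run: TrainingRun) -> bool:
    """Flag training runs at or above the compute threshold for registration."""
    return run.training_flops >= REGISTRATION_THRESHOLD_FLOPS

# Example: a 3e26 FLOP run is flagged for registration; a 5e24 FLOP run is not.
runs = [
    TrainingRun("LabA", "model-x", 3e26),
    TrainingRun("LabB", "model-y", 5e24),
]
for run in runs:
    if requires_registration(run):
        print(f"{run.developer}/{run.model_name}: register with oversight body")
```

The point of the sketch is that the obligation sits on process, not on outcomes: a developer only has to disclose that a large run exists, which is far less intrusive than pre-approval while still giving regulators visibility.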
Imagine a developer at a tech giant discovering a flaw in their AI that could enable discrimination. Under legal advice, they bury the finding to avoid liability. Years later, the system denies loans to marginalised communities, and the paper trail is “lost.” This isn’t hypothetical. Companies routinely suppress risks, a pattern seen in tobacco and pharmaceutical scandals.
Sceptics argue that regulation stifles innovation. But innovation without accountability is recklessness. The MIT team proposes 15 pragmatic policies to enable safer progress, such as whistle-blower protections and mandatory third-party audits. These measures wouldn’t halt development; they’d build public trust.
Others dismiss AI risks as hypothetical. Tell that to the Black plaintiffs falsely arrested by error-prone facial recognition systems, or to Indian communities harmed by AI trained on Western datasets that overlook caste and linguistic disparities.
Lawmakers must reject the deny-and-delay playbook. Start by adopting evidence-seeking policies: mandate model registration to track frontier AI systems, require independent audits for high-risk deployments, and fund public AI governance institutes to counter industry dominance.
By Parishrut Jassal