India's AI Governance Guidelines: What the 7 Sutras Mean for Enterprise Compliance
India has made its position clear: no standalone AI law. Not yet. The AI Governance Guidelines, released by MeitY at the AI Impact Summit 2026, adopt what the government calls a "techno-legal" approach: seven governing principles, three new institutions, and a deliberate preference for voluntary compliance, self-certification, and regulatory sandboxes over prescriptive mandates. The Office of the Principal Scientific Adviser's January 2026 white paper reinforced this direction, advocating for embedding compliance directly into AI system design rather than layering regulation on top.
The instinct for most enterprises will be to treat this as a policy document they can file and forget. That would be a mistake. The Guidelines explicitly state that existing laws, including the DPDP Act 2023, the IT Act 2000, consumer protection statutes, and sector-specific regulations from RBI, SEBI, and IRDAI, already apply to AI systems. The Guidelines don't create new obligations. They clarify how existing obligations apply to AI, and they signal where enforcement is heading. Enterprises that wait for a formal AI law to take action will find that the law catches up to them through the regulators they already report to.
The 7 Sutras: From Principles to Enterprise Obligations
The Guidelines are anchored in seven principles that MeitY calls "sutras." EY India's analysis characterizes them as "innovation-friendly" and "light-touch." That's true at the policy level. At the enterprise level, each sutra translates into specific operational requirements that CISOs, CDOs, and compliance heads need to act on.
Sutra 1: Trust Is the Foundation
What it says: Innovation and adoption stagnate without trust across the AI value chain.
What it means for enterprises: Trust isn't aspirational language. It's a measurable property of AI systems. Does your customer trust that your AI-powered credit scoring model treats them fairly? Does your regulator trust that your AI-driven fraud detection system is auditable? Does your board trust that your AI deployments won't create liability?
Building trust requires transparency about what AI systems do, how they make decisions, and what data they use. For BFSI institutions subject to RBI's IT Governance framework, this means documenting AI model inputs, outputs, and decision logic in a format that satisfies both internal audit and regulatory examination.
Sutra 2: People First
What it says: Human-centric design, human oversight, and human empowerment.
What it means for enterprises: Every AI system that makes decisions affecting individuals needs a human-in-the-loop mechanism. This is particularly relevant for agentic AI systems that operate autonomously. When your AI agent sends an email on behalf of an employee, approves a loan application, or flags a transaction as suspicious, a human must be able to review, override, and be accountable for the outcome.
The practical implication: enterprises deploying AI agents need to define escalation thresholds where autonomous action stops and human review begins. An AI that auto-rejects 50% of insurance claims without human review will eventually trigger regulatory scrutiny, whether from IRDAI, the consumer forum, or both.
Sutra 3: Innovation over Restraint
What it says: Responsible innovation is prioritized over cautionary restraint.
What it means for enterprises: The government wants AI adoption to accelerate, not stall. This is permission to experiment, but within guardrails. The Guidelines specifically recommend regulatory sandboxes for testing AI applications in controlled environments before full deployment. Enterprises should engage with sector regulators (RBI for fintech AI, SEBI for trading algorithms, IRDAI for underwriting models) to establish sandbox arrangements for high-risk AI deployments rather than avoiding deployment entirely out of regulatory uncertainty.
Sutra 4: Fairness and Equity
What it says: Inclusive development, avoiding discrimination, with special attention to marginalized groups.
What it means for enterprises: Bias testing is no longer optional. The Guidelines specifically call out risks to vulnerable groups: children targeted by exploitative recommendation systems, women disproportionately targeted by deepfakes, and caste-based discrimination in automated decision-making. If your AI system makes decisions about hiring, credit, insurance, or service delivery, you need documented bias testing results that demonstrate the system doesn't discriminate across protected categories.
For enterprises likely to be designated as Significant Data Fiduciaries under the DPDP Act, this intersects with the DPIA requirement: algorithmic fairness assessment becomes part of the annual Data Protection Impact Assessment.
Sutra 5: Accountability
What it says: Clear allocation of responsibility based on function and risk.
What it means for enterprises: When an AI system causes harm, who is responsible? The developer who built the model? The deployer who put it into production? The user who configured it? The Guidelines establish that accountability follows function: the entity that determines how an AI system is used bears primary responsibility for its outcomes.
For Indian enterprises, this means the deployer (the company using the AI) is accountable, not the AI vendor. If you deploy a third-party AI model for customer service and it provides harmful advice, you can't point at the vendor and claim ignorance. This has direct implications for vendor agreements: AI vendor contracts need clauses that define responsibility boundaries, indemnification for model failures, and access to model documentation for audit purposes.
Sutra 6: Understandable by Design
What it says: AI systems must provide disclosures and explanations that users and regulators can comprehend.
What it means for enterprises: Explainability isn't a research problem anymore. It's a compliance requirement. When a customer asks why their loan was rejected, "the model said no" isn't an acceptable answer. The system must be able to produce an explanation in language the customer, and the ombudsman, can understand.
For BFSI institutions, this sutra intersects with RBI's existing expectations on customer grievance redressal. Credit decisioning models, whether AI-powered or not, must produce auditable rationales. Enterprises deploying black-box AI models without explainability layers are accumulating regulatory risk with every decision the model makes.
Sutra 7: Safety, Resilience, and Sustainability
What it says: Robust systems that withstand shocks and remain environmentally responsible.
What it means for enterprises: AI systems must be resilient to adversarial attack, robust to input perturbation, and sustainable in their resource consumption. The Claude Mythos disclosure demonstrated that AI systems can be targets as well as tools. Enterprises deploying AI need to assess adversarial robustness: can the model be manipulated through prompt injection? Can training data be poisoned? Can the system be made to produce harmful outputs through carefully crafted inputs?
The sustainability dimension is newer and will become more relevant as Indian data centre capacity scales. AI workloads consume significant compute resources. Enterprises should document the environmental footprint of AI deployments, particularly if listed on Indian exchanges where BRSR Core sustainability disclosures are mandatory.
The Three New Institutions: What They Mean for Your Organization
The Guidelines establish three bodies that will shape how AI governance is enforced:
| Institution | Role | What It Means for Enterprises |
|---|---|---|
| AI Governance Group (AIGG) | Inter-ministerial policy body | Sets the rules. Watch for AIGG guidance that signals where enforcement is heading. When AIGG issues sectoral recommendations, your regulator will follow. |
| Technology & Policy Expert Committee (TPEC) | Scientists and legal experts advising AIGG | Defines what "reasonable" AI governance looks like. TPEC's assessments will become the benchmark against which your practices are judged. |
| AI Safety Institute (AISI) | Testing, standards, international collaboration | Tests AI models for safety. If AISI develops testing protocols, enterprises deploying high-risk AI may be expected to submit to AISI evaluation, particularly in healthcare, finance, and critical infrastructure. |
The practical implication: these institutions don't exist yet in operational form. But they will. And when they do, the first thing they'll ask for is evidence of existing AI governance. Enterprises that can produce an AI inventory, documented risk assessments, and bias testing results will be ahead. Those that can't will be scrambling.
The Guidelines don't create new law. They clarify how existing law applies to AI, and they signal where new enforcement is coming. The enterprises that read this as "nothing to do yet" are the same ones that read the DPDP Act as "nothing to do until SDF designations are announced." Both times, the cost of waiting exceeds the cost of preparing. - SARC Data & AI Practice
The Enterprise Action Plan: What to Do Before It Becomes Mandatory
Action 1: Build an AI system inventory
You cannot govern what you haven't catalogued. Every AI system in your organization, from the customer-facing chatbot to the internal analytics model to the shadow AI tools your employees are using, needs to be documented: what it does, what data it processes, who deployed it, what decisions it influences, and what risks it creates.
The IBM Cost of a Data Breach Report 2025 found that 63% of organizations had no AI governance policies and only 34% conducted regular audits for unsanctioned AI. The first step isn't writing a policy. It's knowing what AI you have.
Action 2: Map AI obligations to your existing regulatory framework
The Guidelines explicitly state that existing laws apply. Map your AI systems against the regulations you already comply with:
- DPDP Act: AI systems processing personal data need consent architecture, DPIA coverage (for SDFs), and vendor compliance for third-party AI processors
- RBI IT Governance: AI in credit decisioning, fraud detection, and customer service falls under the Master Direction's model risk management expectations
- CERT-In: AI system compromises are reportable incidents under the 6-hour notification requirement
- SEBI: Algorithmic trading and AI-driven investment advisory have existing compliance requirements
- Consumer Protection Act: AI systems making consumer-facing decisions must comply with unfair trade practice provisions
Action 3: Adopt ISO/IEC 42001 as your governance backbone
ISO/IEC 42001, published in December 2023, is the first international standard for AI management systems. It provides a certifiable governance framework covering AI risk assessment, control implementation, and continuous improvement. Global enterprises are increasingly requiring 42001 alignment from suppliers. Indian companies serving international clients will need certification sooner than they think.
42001 functions as the structural backbone of AI governance, the same way ISO 27001 functions for information security. It doesn't replace sector-specific requirements; it provides the management system that ensures compliance with all of them.
Action 4: Establish bias testing and explainability for high-risk AI
Sutras 4 and 6 require fairness and understandability. For AI systems that make decisions about credit, insurance, employment, or service delivery, implement:
- Pre-deployment bias testing across protected categories (gender, caste, religion, geography)
- Runtime monitoring for outcome disparities
- Explainability layers that can produce human-readable decision rationales
- Grievance mechanisms for individuals affected by AI decisions
Frequently Asked Questions
Do we need to comply with these Guidelines now? The Guidelines are currently voluntary, not mandatory. But "voluntary" in the Indian regulatory context means "we're telling you what we expect before we make it a requirement." The Guidelines explicitly state that existing laws already apply to AI. Sector regulators like RBI, SEBI, and IRDAI will incorporate these principles into their own frameworks. Enterprises that align now will absorb future mandates smoothly. Those that wait will face compressed timelines.
Is a separate AI law coming? Not immediately. The MeitY committee assessed that a standalone AI law is not needed at this stage. Existing laws (IT Act, DPDP Act, Consumer Protection Act, sector-specific regulations) provide adequate coverage. Legal amendments may be considered for specific gaps like copyright for AI training data and platform classification. But the direction is clear: AI governance through existing regulatory channels, not a new horizontal statute.
How does this interact with the DPDP Act? Directly. AI systems that process personal data are subject to all DPDP Act obligations: consent, purpose limitation, data minimization, security safeguards, breach notification, and individual rights. For Significant Data Fiduciaries deploying AI, the annual DPIA requirement must cover AI-specific risks: algorithmic bias, automated decision-making impact, and data quality for model training.
Should we get ISO 42001 certified? If you serve international clients or operate in regulated sectors, yes. ISO 42001 is rapidly becoming a procurement requirement for global enterprises. Certification demonstrates governance maturity and provides a structured framework for managing AI risk. Even if certification isn't immediately required, adopting the 42001 framework as an internal standard provides the management system backbone that the AI Governance Guidelines expect.
What's the role of the AI Safety Institute? AISI will conduct safety testing, develop standards, and participate in international safety networks. For enterprises, AISI's standards will become the benchmark for "reasonable" AI safety measures. If AISI develops testing protocols for high-risk AI applications (healthcare diagnostics, financial decisioning, autonomous vehicles), enterprises in those sectors should expect to demonstrate compliance with AISI standards.
SARC's Data & AI Practice helps enterprises build AI governance programs aligned with MeitY's Guidelines, ISO 42001, and sector-specific regulatory requirements. From AI system inventory development to bias testing, explainability frameworks, and board-level AI risk briefings, we help organizations govern AI responsibly without slowing innovation.
Our advisory team is ready to help.

