India Issues 2025 AI Governance Guidelines: How It Compares to Other Global AI Acts

By Bindu Janardhanan, Scott Warren & Tanvi Mehta Krensel on December 22, 2025

In 2025, India’s approach to AI shifted significantly from “Will AI change the way business is done?” to “What is the best way to adopt it to enable business expansion?” Guided by the principles of People, Planet, and Progress, “Safe and trusted AI for all” has become the motto governing India’s approach to AI. An evolving digital infrastructure, sector-driven regulation, a techno-legal philosophy, leadership within the Global South, and a strong inclusion narrative are the cornerstones of India’s AI journey.

AI and Global Governance

There are several basic models for AI governance that are emerging globally.

The European Union

The EU establishes, in essence, a single horizontal regulation on AI. It classifies systems according to the level of risk posed to the end user: unacceptable, high, limited, and minimal. It forbids certain specific practices and imposes strict documentation, compliance, and penalty regimes for high-risk AI. For instance, Chapter II, Article 5 of the EU AI Act prohibits, as an unacceptable risk, the deployment of subliminal, manipulative, or deceptive techniques that distort behavior and impair informed decision-making, causing significant harm.

United States

The United States currently lacks federal AI legislation, except with respect to AI affecting national security. However, several states have enacted their own AI laws. Colorado, Utah, Illinois, California, and Texas all have AI Acts that tend to focus on bias, discrimination, and civil rights in hiring and employment, as well as profiling. Much of this conduct is already prohibited by various state privacy laws. Notably, President Trump has just issued an Executive Order aimed at preventing states from passing further AI-focused laws. It will be interesting to see how this Order may impact US states going forward.

APAC

Many APAC jurisdictions aim to balance AI-driven innovation with safeguards against potential abuse.

  • South Korea: new AI provisions align closest with the EU’s risk-based approach. 
  • China: legislation emphasizes risk of social unrest and other national security threats posed by generative AI, although the courts have weighed in on private sector misuse.
  • Japan: the focus is on allowing free development, but it also recently established a Cabinet-level office to monitor AI deployment and usage to adapt its policies as needed. 
  • APEC: adopted the APEC AI Initiative (2026-30) on November 4, 2025, which prioritizes AI infrastructure development in the region over restrictions on AI. 
  • Australia: released its National AI Plan on December 2, 2025, which moves away from an EU-style approach and instead emphasizes regulating AI under existing laws, supported by a newly formed regulatory body, the National AI Safety Institute.

India’s 2025 AI Governance Guidelines

India’s AI Governance Guidelines reflect its development priorities, diversity, and evolving digital capabilities. They are structured into four main sections:

  • An action plan outlining short-, medium-, and long-term actions, such as creating organizations and incident databases, launching sandboxes, and amending existing legislation;
  • Practical guidelines entities should adopt;
  • Six pillars of governance: a set of recommendations under which India aims to increase data access, provide specific amendments, devise risk tools tailored to India, define liability, and develop AI-focused institutions. They span:
    • Infrastructure
    • Capacity Building
    • Policy and Regulation
    • Risk Mitigation
    • Accountability
    • Institutions
  • Seven “sutras,” a set of guiding principles for AI development:
    • Trust is the Foundation
    • People First
    • Innovation Over Restraint
    • Fairness & Equity
    • Accountability
    • Understandable by Design
    • Safety, Resilience & Sustainability

What’s distinctive about India’s approach?

Several factors seem to set India’s strategy apart from others. For example, India has already been regulating AI through existing laws, new guidelines, and sector-specific rules, such as those issued by the Reserve Bank of India (RBI) and the Securities and Exchange Board of India (SEBI), rather than a single or overarching AI statute.

With innovation a core focus, India intends to follow a “hands-off” approach to encourage new AI development while addressing harms through existing laws. The country’s strategy is to leverage AI for economic growth by focusing on the application of AI and using existing laws for specific issues like data privacy and discrimination.

The principle behind the sutras is that innovation should take precedence over preventative restrictions, while obligations related to safety, accountability, and fairness are maintained. The key word is accountability. India aims to strike a more direct balance between risk and growth than, perhaps, the obligations imposed under the EU AI Act.

India has already rolled out extensive and distinctive Digital Public Infrastructure (DPI) platforms, which it hopes to leverage in this implementation. These include digital solutions such as Aadhaar (equivalent to a citizen ID or social security number), UPI, DigiLocker, and various data exchanges. The approach aims to use existing, shared digital rails for identity verification, payments, and data exchange to deliver services efficiently in crucial sectors such as healthcare, agriculture, education, and welfare, with a particular focus on low-income and rural communities.

The guidance emphasizes the importance of embedding legal requirements directly in AI system architecture. Examples include privacy-enhancing technologies, content-authentication standards (such as C2PA-style watermarking), and DEPA for AI training. This idea of “compliance-by-design,” akin to the “privacy-by-design” concept codified in the GDPR, goes beyond the obligations stated in many other AI frameworks to date.

Further, India plans to set up a Technology Policy Expert Committee, an AI Governance Group (AIGG) for high-level collaboration and coordination, and a dedicated AI Safety Institute to test models, set standards, and participate in international safety networks.

The guidelines provide for a risk-assessment and classification system that focuses on national security issues and harms to vulnerable groups (for example, deepfakes targeting women, child safety, and language and caste bias) instead of relying on generic risk grids. This social-context approach is thought to be a better fit for India’s population and diversity than many of the “one-size-fits-all” models found elsewhere.

India focuses on using voluntary commitments, self-certifications, transparency reports, and third-party audits before imposing strict obligations on AI, and intends to offer stronger incentives such as sandbox access, reputational badges, and targeted support. This systematic use of incentives to promote voluntary protections appears more prominent than in many other regimes.

India’s role in the governance of AI worldwide

It appears India aims to leverage AI governance as a diplomatic tool, particularly within the Global South, while also fostering local economic growth. The recommendations place India’s balanced, DPI-enabled, inclusion-first model at the center of global discussions, calling for active engagement in multilateral forums such as the G20, UN, OECD, and other similar bodies.

Through these efforts, India seeks to shape international standards in areas like child safety, content authentication, and safety testing, supported by initiatives such as hosting an AI Impact Summit and joining networks of AI safety institutes. At the same time, India aims to demonstrate that open, interoperable platforms can deliver solutions capable of wide adoption. A combination of normative leadership (e.g., guiding principles, safety norms) and practical infrastructure (DPI, the AI Mission, GPUs, and AI Kosh datasets) is what sets India apart from the rest of the world in its approach to AI governance.

Disclaimer: While every effort has been made to ensure that the information contained in this article is accurate, neither its authors nor Squire Patton Boggs accepts responsibility for any errors or omissions. The content of this article is for general information only, and is not intended to constitute or be relied upon as legal advice.

  • Posted in:
    Privacy & Data Security
  • Blog:
    Privacy World
  • Organization:
    Squire Patton Boggs
