India's recent proposal to elevate the Global Partnership on Artificial Intelligence (GPAI) to an intergovernmental body on AI has garnered significant attention in the international community. This move, while ambitious, raises important questions about the future of AI governance and regulation on a global scale. This brief examines India's bid to position GPAI as an AI regulator, with special emphasis on the Global South, and outlines the key challenges associated with GPAI, MeitY and today's AI landscape.
India's Leadership in GPAI
India, as the current chair of GPAI, has been instrumental in expanding the initiative to include more countries, aiming to transform it into a central body for global AI policy-making. The GPAI, which started with 15 nations, has now expanded to 29 and aims to include 65 countries by next year.
India's leadership in GPAI was further solidified when it was elected as the Incoming Council Chair in November 2022, securing a significant majority of the first-preference votes.
Throughout the 2022-23 term, India served as the Incoming Support Chair, and on December 12, 2023, it assumed the Lead Chair position for the 2023-24 term. India is also set to serve as the Outgoing Support Chair in the subsequent year, showcasing its continued dedication to GPAI's mission.
India's commitment to advancing GPAI's goals was prominently displayed when it hosted the GPAI Annual Summit from December 12th to 14th, 2023. This summit brought together representatives from all 29 GPAI member countries under India's guidance to discuss a wide range of AI-related topics. The event, organized by MeitY, was inaugurated by Prime Minister Shri Narendra Modi, who reiterated India's commitment to leveraging AI for societal betterment and equitable growth while emphasizing the importance of responsible, human-centric AI governance.
The Role of MeitY
The Ministry of Electronics and Information Technology (MeitY) has been pivotal in negotiating the inclusion of OECD nations and advocating for greater participation from the Global South in AI regulation. MeitY has been at the forefront of India's efforts in GPAI, organizing key events such as the 2023 GPAI Annual Summit. However, its approach to AI governance has faced criticism for being reactive and arbitrary.
The recent advisory issued by MeitY on the use of AI in elections was met with strong backlash from the AI community. The advisory required platforms offering "under-testing/unreliable" AI systems or large language models (LLMs) to Indian users to explicitly seek prior permission from the central government. This was seen as regulatory overreach that could stifle innovation in the nascent AI industry.
While the government later clarified that the advisory was aimed at significant platforms and not startups, the incident highlighted the need for a more proactive and consultative approach to AI regulation.
Moreover, the complexity and breadth of AI policy suggest that a single ministry may not be sufficient to handle all aspects of AI governance. A more integrated, inter-ministerial approach could enhance India's capacity to lead effectively in this domain. The inter-ministerial committee formed by MeitY with secretaries from DoT, DSIR, DST, DPIIT, and NITI Aayog as members is a step in this direction, as was the Principal Scientific Adviser's Office taking charge of AI policy in April-May 2024.
However, the composition of such bodies, including the proposed National Research Foundation (NRF), has been criticized for including too many bureaucrats and too few specialists. The NRF, which aims to provide high-level strategic direction for scientific research in India, will be governed by a Governing Board consisting of eminent researchers and professionals across disciplines.
To truly foster responsible and inclusive AI development, MeitY and other government bodies must adopt a more collaborative and transparent approach. This should involve engaging with a wide range of stakeholders, including AI experts, civil society organizations, and industry representatives, to develop a comprehensive and balanced regulatory framework. Additionally, capacity building within the government, including training officials in AI technologies and their implications, is crucial for effective governance.
Timing and Nature of AI Regulation
The AI regulation debate spans a wide spectrum of views, from those who believe that the current "moral panic" about AI is overblown and irrational[2], to those who advocate for varying degrees of regulation to address the risks posed by AI.
The European Union (EU) is at the forefront of AI regulation, with its proposed AI Act that classifies AI systems into four tiers based on their perceived risk[2]. The EU's approach is seen as more interventionist compared to the hands-off approach favored by some venture capitalists and tech companies. However, even within the EU, there are differing opinions on the timing and scope of AI regulation, with some arguing that premature regulation could stifle innovation in the nascent AI industry[1].
Many experts propose a risk-based approach to AI regulation, where higher-risk AI applications that can cause greater damage are subject to proportionately greater regulation, while lower-risk applications have less[2]. However, implementing such an approach is challenging, as it requires defining and measuring risk, setting minimum requirements for AI services, and determining which AI uses should be deemed illegal.
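To make this concrete, the short sketch below illustrates how a risk-based tiering scheme might be expressed in practice, loosely following the four risk categories used in the EU's AI Act. The use cases, tier assignments, and obligation lists are illustrative assumptions only, not a reproduction of any statute, and the sketch highlights exactly the difficulty noted above: someone still has to decide what goes into the mapping tables.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers loosely modelled on the EU AI Act's four categories."""
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # heavy obligations before deployment
    LIMITED = "limited"             # transparency duties only
    MINIMAL = "minimal"             # no additional obligations

# Illustrative tier assignments -- assumptions for this sketch, not the Act's annexes.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "medical_triage": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

# Illustrative obligations attached to each tier.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: ["conformity assessment", "risk management system",
                    "human oversight", "post-market monitoring"],
    RiskTier.LIMITED: ["transparency notice to users"],
    RiskTier.MINIMAL: [],
}

def obligations_for(use_case: str) -> list[str]:
    """Return the obligations for a use case; unknown uses default to HIGH."""
    return OBLIGATIONS[USE_CASE_TIERS.get(use_case, RiskTier.HIGH)]

print(obligations_for("credit_scoring"))
# ['conformity assessment', 'risk management system', 'human oversight', 'post-market monitoring']
```

The code is trivial; the regulatory work lies in populating and defending the two dictionaries, which is precisely where definitions of risk and minimum requirements become contested.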
Given the challenges in establishing comprehensive AI regulations at this stage, some experts like Gary Marcus have proposed the creation of an International AI Agency, akin to CERN, which conducts independent research free from market influences[3]. This approach would allow for the development of AI in a responsible and ethical manner without premature regulatory constraints. The proposed agency would focus on groundbreaking research to address technical challenges in developing AI with secure and ethical objectives, and on establishing robust AI safety measures to mitigate potential risks[3].
Advocates for AI safety stress the importance of initiatives like a CERN for AI safety and Global AI governance to effectively manage risks[3]. They emphasize the need to balance the focus on diverse risks within the AI landscape, from immediate concerns around bias, transparency, and security, to long-term risks such as the potential loss of control over future advanced machines[3].
In navigating the complexities of AI governance, ongoing dialogue underscores the critical role of research in understanding and addressing AI risks[3]. While some argue that AI-related harm has been limited to date, the evolving landscape highlights the need for proactive measures to avert potential misuse of AI technologies[3].
As of February 2024, the global AI regulatory landscape continues to evolve rapidly. The EU's AI Act has been signed by the Committee of Permanent Representatives, and the consolidated text has been published by the European Parliament[4]. The European Commission has also adopted its own approach to AI, focusing on fostering the development and use of lawful, safe, and trustworthy AI systems[4].
In Asia, Singapore's AI Verify Foundation and Infocomm Media Development Authority have released a draft Model AI Governance Framework for Generative AI, which is currently open for consultation[4]. The Monetary Authority of Singapore has also concluded the first phase of Project MindForge, which seeks to develop a risk framework for the use of Generative AI in the financial sector[4].
These developments underscore the ongoing efforts to establish effective AI governance frameworks at both regional and global levels. As the AI landscape continues to evolve rapidly, finding the right balance between innovation and risk mitigation will be crucial in shaping the future of AI regulation.
GPAI and the Global South
India's commitment to representing the interests of the Global South in AI governance is commendable, but it also faces several challenges and criticisms. One of the primary concerns is the ongoing debate around the moratorium on customs duties on electronic transmissions at the World Trade Organization (WTO)[5]. Many developing countries, including India, argue that the moratorium disproportionately benefits developed countries and limits the ability of developing nations to generate revenue and promote digital industrialization.
India's position is that all policy options, including the imposition of customs duties on e-commerce trade, should be available to WTO members to promote digital industrialization[5]. It has highlighted the potential tariff revenue losses of around $10 billion annually for developing countries due to the moratorium[5]. This revenue could be crucial for developing countries to invest in digital infrastructure and capacity building, which are essential for harnessing the benefits of AI.
However, navigating this complex issue will require careful diplomacy and a nuanced approach that balances the interests of developing and developed countries. India will need to work closely with other countries in the Global South to build consensus around a common position on the moratorium and advocate for a more equitable global trade framework that supports the digital industrialization aspirations of developing nations.
Another criticism faced by India in its advocacy for the Global South is the unequal access to AI research and development (R&D) among developing nations[6].
The AI Index Report 2023 reveals that private investment in AI in the United States from 2013-22 (about $250 billion) significantly outpaced that of other economies, including India and most other G20 nations[6]. This disparity in access to AI R&D could lead to extreme outcomes for underdeveloped nations, such as economic threats, political instability, and compromised sovereignty[6].
To address this challenge, India must focus on building partnerships and sharing best practices in AI development and governance with other countries in the Global South[6]. Collaborations aimed at developing AI solutions tailored to the specific needs of these regions, such as in agriculture, healthcare, and education, can help ensure that AI benefits are more equitably distributed[6].
Historical Context and Capacity Building
Historically, other nations have employed similar strategies to gain influence in international organizations. For instance, China has been actively involved in the World Intellectual Property Organization (WIPO) and the International Telecommunication Union (ITU). As of March 2020, China led four of the 15 UN specialized agencies and was aiming for a fifth[11]. In the case of WIPO, China has used its influence to shape global intellectual property rules in its favor, such as by pushing for the adoption of the Beijing Treaty on Audiovisual Performances in 2012[7].
Similarly, the United States and the Soviet Union played significant roles in shaping space law. The Outer Space Treaty of 1967, which forms the basis of international space law, was largely a result of negotiations between the US and the Soviet Union during the Cold War era[8]. This treaty set the framework for the peaceful use of outer space and prohibited the placement of weapons of mass destruction in orbit.
France has also been a key player in international organizations, particularly in WIPO and the International Civil Aviation Organization (ICAO). France is one of the most represented countries in WIPO, with a strong presence in various committees and working groups[9]. In ICAO, France has been a member of the Council since the organization's inception in 1947 and has played a significant role in shaping international aviation standards and practices.
However, unlike these countries, India faces significant challenges in terms of capacity building, particularly in AI. While India has made notable progress in developing its AI ecosystem, it still lags behind countries like China and the United States in terms of investment, research output, and talent pool. According to a report by the Observer Research Foundation, India faces several key challenges in driving its AI ecosystem, including a lack of quality data, inadequate funding for research and development, and a shortage of skilled AI professionals[10].
To effectively lead in global AI governance, India must address these capacity building challenges by investing in AI research and development, fostering partnerships between academia and industry, and creating an enabling environment for AI innovation through supportive policies and regulations. Only by strengthening its domestic AI capabilities can India play a more influential role in shaping the future of AI governance on the international stage.
Strengthening domestic AI infrastructure and capabilities is crucial for India to effectively lead in global AI governance. India's approach to AI development has been distinct from the tightly controlled government-led model of China and the laissez-faire venture capital-funded hyper-growth model of the US.
Instead, India has taken a deliberative approach to understand and implement supportive strategies to develop its AI ecosystem. This involves balancing the need to develop indigenous AI capabilities while creating an enabling environment for innovation through strategic partnerships.
However, India faces several challenges in building its AI capacity. One major hurdle is the shortage of skilled professionals in data science and AI. According to a NASSCOM report, India faces a demand-supply gap of 140,000 professionals in AI and Big Data analytics roles[10]. Investing in talent development and fostering partnerships with academia is crucial to address this gap.
Another challenge is the quality and accessibility of data. Many organizations face issues with data standardization and inconsistencies, which can hinder AI model training and accuracy. Investing in technologies like graph and vector databases can help enhance the reliability, performance, and scalability of AI systems. A further challenge is the government's limited support for Indian MSMEs and research labs, which often cannot build AI solutions for fear of lacking the funds to buy compute.
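To illustrate why such data infrastructure matters, the sketch below shows the core operation a vector database optimises: retrieving the stored embeddings most similar to a query. It is a minimal sketch that uses plain NumPy and randomly generated vectors in place of a real vector database and real embeddings; production systems replace the brute-force search with approximate nearest-neighbour indexes so it scales to billions of records.

```python
import numpy as np

def cosine_top_k(query: np.ndarray, corpus: np.ndarray, k: int = 3) -> np.ndarray:
    """Return the indices of the k corpus vectors most similar to the query."""
    corpus_norm = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    query_norm = query / np.linalg.norm(query)
    scores = corpus_norm @ query_norm       # cosine similarity per stored record
    return np.argsort(scores)[::-1][:k]     # highest similarity first

rng = np.random.default_rng(0)
corpus = rng.normal(size=(1000, 64))        # 1,000 toy "document" embeddings
query = rng.normal(size=64)                 # a toy "query" embedding
print(cosine_top_k(query, corpus))          # indices of the 3 closest records
```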
Proposing GPAI as an International AI Agency
Given the considerations discussed earlier, the best course of action for India might be to propose transforming GPAI into an international AI agency rather than a regulatory body. This approach would align with India's strengths in Digital Public Infrastructure (DPI) and allow for a more collaborative and inclusive approach to AI development and governance.
India's success in building DPI, such as the Unified Payments Interface (UPI), Aadhaar, and the Open Network for Digital Commerce (ONDC), has been widely recognized. The UNGA President recently praised India's trajectory, stating that it exemplifies how DPI facilitates equal opportunities. India can leverage its expertise in DPI to shape the future of AI governance through GPAI.
Transforming GPAI into an international AI agency would enable it to focus on fostering international cooperation and independent research. This approach is crucial given the rapid evolution of AI technologies and the need for a collaborative, multi-stakeholder approach to AI governance. A regulator built on half-formed interests, by contrast, could stifle AI innovation in India and the Global South, and the risk of regulatory subterfuge, sabotage, and capture by vested interest groups would loom large.
An international AI agency could bring together experts from various fields, including AI, ethics, law, and social sciences, to address the complex challenges posed by AI. India's proposal to transform GPAI into an international AI agency was discussed at the 6th meeting of the GPAI Ministerial Council held on 3 July 2024 in New Delhi. The proposal received support from several member countries, which recognized the need for a more collaborative and research-focused approach to AI governance.
To effectively shape the future of AI governance, India must also focus on building domestic AI capabilities and infrastructure. The National Strategy for Artificial Intelligence, released by NITI Aayog, outlines a comprehensive plan to develop India's AI ecosystem. The strategy focuses on five key areas: research and development, skilling and reskilling, data and computing infrastructure, standards and regulations, and international collaboration.
Implementing the National Strategy for Artificial Intelligence will be crucial for India to effectively lead in global AI governance. This includes investing in AI research and development, fostering partnerships between academia and industry, and creating an enabling environment for AI innovation through supportive policies and regulations.
How can GPAI draw inspiration from AIACT.IN Version 3?
AIACT.IN Version 3, released on June 17, 2024, is India's first privately proposed comprehensive regulatory framework for artificial intelligence. This groundbreaking proposal introduces several key features designed to ensure the safe, ethical, and responsible development and deployment of AI technologies in India. The proposals in AIACT.IN v3 will hopefully prove helpful for intergovernmental stakeholders at GPAI.
Here are some key ways GPAI can draw inspiration from AIACT.IN Version 3 in its efforts:
Enhanced AI Classification Methods: GPAI can adopt AIACT.IN V3's nuanced approach to classifying AI systems based on conceptual, technical, commercial, and risk-centric methods. This would enable GPAI to better evaluate and regulate AI technologies according to their inherent purpose, features, and potential risks on a global scale.
National AI Use Case Registry: GPAI can establish an international registry for AI use cases, similar to the National Registry proposed in AIACT.IN V3. This would provide a clear framework for tracking and certifying both untested and stable AI applications across member countries, promoting transparency and accountability; a schematic sketch of such a registry entry follows this list.
Balancing Innovation and Risk Mitigation: AIACT.IN V3 aims to balance the need for AI innovation with the protection of individual rights and societal interests. GPAI can adopt a similar approach in its global efforts, fostering responsible AI development while safeguarding against potential misuse.
AI Insurance Policies: Drawing from AIACT.IN V3's mandate for insurance coverage of high-risk AI systems, GPAI can develop international guidelines for AI risk assessment and insurance. This would help manage the risks associated with advanced AI technologies and ensure adequate protection for stakeholders worldwide.
AI Pre-classification: GPAI can implement an early assessment mechanism for AI systems, inspired by the AI pre-classification proposed in AIACT.IN V3. This would enable proactive evaluation of potential risks and benefits, allowing for timely interventions and policy adjustments.
Guidance Principles for AI Governance: AIACT.IN V3 provides guidance on AI-related contracts and corporate governance to promote responsible practices. GPAI can develop similar international principles and best practices to guide AI governance across member countries, fostering consistency and cooperation.
Global AI Ethics Code: Building on the National AI Ethics Code in AIACT.IN V3, GPAI can work towards establishing a flexible yet robust global ethical framework for AI development and deployment. This would provide a common foundation for responsible AI practices worldwide.
Collaborative Approach: AIACT.IN V3 was developed through a collaborative effort involving experts from various domains. GPAI can strengthen its multi-stakeholder approach, engaging AI practitioners, policymakers, industry leaders, and civil society representatives to develop comprehensive and inclusive AI governance frameworks.
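As a purely schematic illustration of how the classification methods and the use case registry above could fit together, the sketch below models one hypothetical registry entry. None of the field names, categories, or values are drawn from the text of AIACT.IN V3 or any GPAI document; they are assumptions made only to show the shape such a record might take.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

# Every name below is a hypothetical illustration, not a term defined in AIACT.IN V3.

class MaturityStatus(Enum):
    UNDER_TESTING = "under-testing"
    STABLE = "stable"

@dataclass
class RegistryEntry:
    """One record in a notional international AI use case registry."""
    use_case_id: str
    member_country: str
    deployer: str
    conceptual_class: str      # e.g. "general-purpose" vs "narrow"
    technical_class: str       # e.g. "large language model"
    commercial_class: str      # e.g. "consumer-facing service"
    risk_class: str            # e.g. "high"
    status: MaturityStatus = MaturityStatus.UNDER_TESTING
    certifications: list[str] = field(default_factory=list)
    registered_on: date = field(default_factory=date.today)

# Registering a hypothetical under-testing system.
entry = RegistryEntry(
    use_case_id="IN-2024-0001",
    member_country="India",
    deployer="Example AI Labs",
    conceptual_class="general-purpose",
    technical_class="large language model",
    commercial_class="consumer-facing service",
    risk_class="high",
)
print(entry.status.value)  # "under-testing"
```

Recording the four classification axes alongside a maturity status is what would let a registry of this kind certify stable systems while flagging under-testing ones for closer scrutiny.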
Conclusion
In conclusion, India's proactive stance in AI governance is commendable, but the path forward requires careful consideration of domestic capabilities, international dynamics, and the evolving nature of AI.
Thanks for reading this insight.
Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train.
We offer special discounts for technology teams interested in enrolling in the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.
References
[1] EY, 'How to navigate global trends in Artificial Intelligence regulation' (EY, 2023) <https://www.ey.com/en_in/ai/how-to-navigate-global-trends-in-artificial-intelligence-regulation> accessed 3 July 2024.
[2] James Andrew Lewis, 'AI Regulation is Coming: What is the Likely Outcome?' (Center for Strategic and International Studies, 18 May 2023) <https://www.csis.org/blogs/strategic-technologies-blog/ai-regulation-coming-what-likely-outcome> accessed 3 July 2024.
[3] Gary Marcus, 'A CERN for AI and the Global Governance of AI' (Marcus on AI, 2 June 2023) <https://garymarcus.substack.com/p/a-cern-for-ai-and-the-global-governance> accessed 3 July 2024.
[4] Eversheds Sutherland, 'Global AI Regulatory Update - February 2024' (Eversheds Sutherland, 26 February 2024) <https://www.eversheds-sutherland.com/en/global/insights/global-ai-regulatory-update-february-2024> accessed 3 July 2024.
[5] Murali Kallummal, 'WTO's E-commerce Moratorium: Will India Betray the Interests of the Global South?' (The Wire, 10 June 2023) <https://thewire.in/trade/wtos-ecommerce-moratorium-india-us> accessed 3 July 2024.
[6] Business Insider India, 'India to host Global India AI Summit 2024 in New Delhi on July 3-4' (Business Insider India, 1 July 2024) <https://www.businessinsider.in/tech/news/india-to-host-global-india-ai-summit-2024-in-new-delhi-on-july-3-4/articleshow/111398169.cms> accessed 3 July 2024.
[7] Yeling Tan, 'China and the UN System – the Case of the World Intellectual Property Organization' (Carnegie Endowment for International Peace, 3 March 2020) <https://carnegieendowment.org/posts/2020/03/china-and-the-un-system-the-case-of-the-world-intellectual-property-organization?center=global&lang=en> accessed 3 July 2024.
[8] Jérôme Sgard, 'Bretton Woods and the Reconstruction of Europe' (2018) 44(4) The Journal of Economic History 1136 <https://www.jstor.org/stable/45367420> accessed 3 July 2024.
[9] WIPO, 'Information by Country: France' (WIPO) <https://www.wipo.int/directory/en/details.jsp?country_code=FR> accessed 3 July 2024.
[10] Trisha Ray and Akhil Deo, 'Digital Dreams, Real Challenges: Key Factors Driving India's AI Ecosystem' (Observer Research Foundation, 12 April 2023) <https://www.orfonline.org/research/digital-dreams-real-challenges-key-factors-driving-indias-ai-ecosystem> accessed 3 July 2024.
[11] Courtney J. Fung, 'China already leads 4 of the 15 U.N. specialized agencies — and is aiming for a 5th' (The Washington Post, 3 March 2020) <https://www.washingtonpost.com/politics/2020/03/03/china-already-leads-4-15-un-specialized-agencies-is-aiming-5th/> accessed 3 July 2024.