



Abhivardhan

Why AI Standardisation & Launching AIStandard.io & Re-introducing IndoPacific.App



Artificial Intelligence (AI) is widely recognized as a disruptive technology with the potential to transform various sectors globally. However, the economic value of AI technologies remains inadequately quantified. Despite numerous reports on AI ethics and governance, many of these efforts have been inconsistent and reactionary, often failing to address the complexities of regulating AI effectively. Even India's MeitY AI advisory, which itself faces constitutional challenges, was the result of a knee-jerk reaction.



Many companies are hastily deploying AI without a comprehensive understanding of its limitations, resulting in substandard or half-baked solutions that can cause more harm than good.




While AI solutions have demonstrated tangible benefits in B2B sectors such as agriculture, supply chain management, human resources, transportation, healthcare, and manufacturing, the impact on B2C segments like creative, content, education, and entertainment remains unclear. The long-term impact of RoughDraft AI or GenAI should be approached with caution, and governments worldwide should prioritize addressing the risks associated with the misuse of AI, which can affect the professional capabilities of key workers and employees involved with AI systems.


This article aims to explain why AI standardization is necessary and what can be achieved through it in and for India. With the wave of AI hype, legal-ethical risks surrounding substandard AI solutions, and a plethora of AI policy documents, it is crucial to understand the true nature of AI and its significance for the majority of the population.


By establishing comprehensive ethics principles for the design, development, and deployment of AI in India, drawing from global initiatives but grounded in the Indian legal and regulatory context, India can harness the potential of AI while mitigating the associated risks, ultimately leading to a more robust and ethical AI landscape.


The Hype and Reality of AI in India




The rapid advancement of Artificial Intelligence (AI) has generated significant excitement and hype in India. However, it is crucial to separate the hype from reality and address the challenges and ethical considerations that come with AI adoption.


The Snoozefest of AI Policy Jargon: Losing Sight of What Matters


In the midst of the AI hype train, we find ourselves drowning in a deluge of policy documents that claim to provide guidance and clarity, but instead leave us more confused than ever. These so-called "thought leaders" and "experts" seem to have mastered the art of saying a whole lot of nothing, using buzzwords and acronyms that would make even the most seasoned corporate drone's head spin.


Take, for example, the recent advisory issued by the Ministry of Electronics and Information Technology (MeitY) on March 1, 2024. This masterpiece of bureaucratic jargon manages to use vague terms like "undertested" and "unreliable" AI without bothering to define them or provide any meaningful context. It's almost as if they hired a team of interns to play buzzword bingo and then published the results as official policy.


Just a few days later, on March 15, the government issued yet another advisory, this time stipulating that AI models should only be accessible to Indian users if they have clear labels indicating potential inaccuracies or unreliability in the output they generate. Because apparently, the solution to the complex challenges posed by AI is to slap a warning label on it and call it a day.

And let's not forget the endless stream of reports, standards, and frameworks that claim to provide guidance on AI ethics and governance. From the IEEE's Ethically Aligned Design initiative to the OECD AI Principles, these documents are filled with high-minded principles and vague platitudes that do little to address the real-world challenges of AI deployment.


Meanwhile, the actual stakeholders – the developers, researchers, and communities impacted by AI – are left to navigate this maze of jargon and bureaucracy on their own. Startups and SMEs struggle to keep up with the constantly shifting regulatory landscape, while marginalized communities bear the brunt of biased and discriminatory AI systems.


It's time to cut through the noise and focus on what really matters: developing AI systems that are transparent, accountable, and aligned with human values. We need policies that prioritize the needs of those most impacted by AI, not just the interests of big tech companies and investors. And we need to move beyond the snoozefest of corporate jargon and engage in meaningful, inclusive dialogue about the future we want to build with AI.


So let's put aside the TESCREAL frameworks and the buzzword-laden advisories, and start having real conversations about the challenges and opportunities of AI. Because at the end of the day, AI isn't about acronyms and abstractions – it's about people, and the kind of world we want to create together.


Overpromising and Underdelivering


Many companies in India are rushing to deploy AI solutions without fully understanding their capabilities and limitations. This has led to a proliferation of substandard or half-baked AI products that often overpromise and underdeliver, creating confusion and mistrust among consumers. The excessive focus on generative AI and large language models (LLMs) has also overshadowed other vital areas of AI research, potentially limiting innovation.


Ethical and Legal Considerations


The integration of AI in various sectors, including healthcare and the legal system, raises complex ethical and legal questions. Concerns about privacy, bias, accountability, and transparency need to be addressed to ensure the responsible development and deployment of AI. The lack of clear regulations and ethical guidelines around AI in India has created uncertainty and potential risks.


Policy and Regulatory Challenges


India's approach to AI regulation has been reactive rather than strategic, with ad hoc responses and unclear guidelines. The recent AI advisory issued by the Ministry of Electronics and Information Technology (MeitY) has faced criticism for its vague terms and lack of legal validity. There is a need for a comprehensive legal framework that addresses the unique aspects of AI while fostering innovation and protecting individual rights.


Balancing Innovation and Competition


AI has the potential to drive efficiency and innovation, but it also raises concerns about market concentration and anti-competitive behavior. The Competition Commission of India (CCI) has recognized the need to study the impact of AI on market dynamics and formulate policies that effectively address its implications on competition.


What's Really Happening in the "India" AI Landscape?


Lack of Settled Legal Understanding of AI


India currently lacks a clear legal framework that defines AI and its socio-economic and juridical implications. This absence of settled laws has led to confusion among the judiciary and executive branches regarding what can be achieved through consistent AI regulations and guidelines[1].


A recent advisory issued by the Ministry of Electronics and Information Technology (MeitY) in March 2024 aimed to provide guidelines for AI models under the Information Technology Act. However, the advisory faced criticism for its vague terms and lack of legal validity, highlighting the challenges posed by the current legal vacuum[2].

The ambiguity surrounding AI regulation is exemplified by the case of Ankit Sahni, who attempted to register an AI-generated artwork but was denied by the Indian Copyright Office. The decision underscored the inadequacy of existing intellectual property laws in addressing AI-generated content[3].


Limited Participation from Key Stakeholders


The AI discourse in India is largely driven by investors and marketing leaders, often resulting in half-baked narratives that fail to address holistic questions around AI policy, compute economics, patentability, and productization[1].


The science and research community, along with the startup and MSME sectors, have not actively participated in shaping realistic and effective AI policies. This lack of engagement from key stakeholders has hindered the development of a comprehensive AI ecosystem[4].

Successful multistakeholder collaborations, such as the IEEE's Ethically Aligned Design initiative, demonstrate the value of inclusive policymaking[5]. India must encourage greater participation from diverse groups to foster innovation and entrepreneurship in the AI sector.


Impact of AI on Employment


The impact of AI on employment in India is multifaceted, with varying effects across industries. While AI solutions have shown tangible benefits in B2B sectors like agriculture, supply chain management, and healthcare, the impact on B2C segments such as creative, content, and education remains unclear[1].


A study by NASSCOM estimates that around 9 million people are employed in low-skilled services and BPO roles in India's IT sector[6]. As AI adoption increases, there are concerns about potential job displacement in these segments.


However, AI also has the potential to enhance productivity and create new job opportunities. The World Economic Forum predicts that AI will generate specific job roles in the coming decades, such as AI and Machine Learning Specialists, Data Scientists, and IoT Specialists[7].


To harness the benefits of AI while mitigating job losses, India must invest in reskilling and upskilling initiatives. The government has launched programs like the National Educational Technology Forum (NETF) and the Atal Innovation Mission to promote digital literacy and innovation[8].


As India navigates the impact of AI on employment, it is crucial to approach the long-term implications of RoughDraft AI and GenAI with caution. Policymakers must prioritize addressing the risks associated with AI misuse and its potential impact on the professional capabilities of workers involved with AI systems[1].

Taken together, these key points, examples, and trends offer a broad overview of the challenges and considerations surrounding AI policy in India. The next section turns to potential solutions and recommendations for addressing these issues.


A Proposal to "Regulate" AI in India: AIACT.IN


The Draft Artificial Intelligence (Development & Regulation) Act, 2023 (AIACT.IN) Version 2, released on March 14, 2024, is an important private regulation proposal developed by yours truly. While not an official government statute, AIACT.IN v2 offers a comprehensive regulatory framework for responsible AI development and deployment in India. AIACT.IN v2 introduces several key provisions that make it a significant contribution to the AI policy discourse in India:

  1. Risk-based approach: The bill adopts a risk-based stratification and technical classification of AI systems, tailoring regulatory requirements to the intensity and scope of risks posed by different AI applications. This approach aligns with global best practices, such as the EU AI Act. Apart from the risk-based approach, there are three other ways to classify AI (a minimal illustrative sketch of a risk-based classification record follows this list).

  2. Promoting responsible innovation: AIACT.IN v2 includes measures to support innovation and SMEs, such as regulatory sandboxes and real-world testing. It also encourages the sharing of AI-related knowledge assets through open-source repositories, subject to IP rights.

  3. Addressing ethical and societal concerns: The bill tackles issues such as content provenance and watermarking of AI-generated content, intellectual property protections, and countering AI hype. These provisions aim to foster transparency, accountability, and public trust in AI systems.

  4. Harmonization with global standards: AIACT.IN v2 draws inspiration from international initiatives such as the UNESCO Recommendations on AI and the G7 Hiroshima Principles on AI. By aligning with global standards, the bill promotes interoperability and facilitates India's integration into the global AI ecosystem.
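
To make the idea of risk-based stratification in clause (1) more concrete, below is a minimal, purely illustrative sketch in Python of how an AI system might be recorded against a risk tier and the obligations that tier triggers. The tier names, fields, and example values are assumptions for illustration only; they are not definitions taken from AIACT.IN v2.

```python
# Purely illustrative sketch (not taken from AIACT.IN v2): one way to record
# an AI system against a hypothetical risk tier and the obligations it triggers.
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Hypothetical risk tiers; AIACT.IN v2 may stratify systems differently."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"


@dataclass
class AISystemRecord:
    """A hypothetical registry entry for one AI system."""
    name: str
    sector: str                       # e.g. "healthcare", "banking & finance"
    intended_purpose: str
    risk_tier: RiskTier
    obligations: list = field(default_factory=list)  # duties the tier triggers


# Example entry: all names and values below are invented for illustration.
example = AISystemRecord(
    name="ClaimTriageModel",
    sector="banking & finance",
    intended_purpose="prioritise insurance claims for human review",
    risk_tier=RiskTier.HIGH,
    obligations=["real-world testing in a regulatory sandbox",
                 "content provenance and watermarking disclosures"],
)
print(example.name, "->", example.risk_tier.value)
```

The point of such a record is simply that the regulatory duties attach to the tier, not to the marketing claims made about the system.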


Despite its status as a private bill, AIACT.IN v2 has garnered significant attention and support from the AI community in India. The Indian Society of Artificial Intelligence and Law (ISAIL) has featured the bill on its website, recognizing its potential to shape the trajectory of AI regulation in the country.


Now, to disclose: I proposed AIACT.IN in November 2023 and again in March 2024 to promote democratic discourse, not the blind implementation of this bill in the form of a law. The response has been overwhelming so far, and a third version of the Draft Act is already in the works.


However, when I took feedback from advocates, corporate lawyers, legal scholars, technology professionals, and even some investors and C-suite professionals in tech companies, the recurring response was that benchmarking AI is itself a hard task, and that even the AIACT.IN proposal could become difficult to implement given the lack of a shared, general understanding of AI.


What to Standardise Then?


Before we standardise artificial intelligence in India, let us first understand what exactly can be standardised.


To be fair, standardisation of AI in India is contingent upon the nature of the industry itself. As of now, the industry is at a nascent stage despite all the hype and the so-called discourse around "GenAI" training. This means that, in India, we are mostly at the scaling-up and R&D stages of AI and GenAI, whether B2B, B2C or D2C.


Second, let's ask: who should be subject to standardisation? In my view, AI standardisation must be neutral as to the net worth or economic status of any company in the market. This means that the principles of AI standardisation, both sector-neutral and sector-specific, must apply to all market players in a competitive sense. This is why the Indian Society of Artificial Intelligence and Law has introduced Certification Standards for Online Legal Education (edtech).


Nevertheless, AI standards must be developed in a way that remains mindful of the original and credible use cases that are emerging. The biggest risk of AI hype in this decade is that any random company can claim to have a major AI use case, only for it to turn out that the AI behind their "solution" has not been tested or effectively built even as a test case. This is why it becomes necessary to address AI use cases critically.

There are two key ways to standardise AI without regulating it: (1) the legal-ethical way; and (2) the technical way. Neither approach should be used to discount the other. In my view, both methods must be implemented, with caution and sense. The reason is obvious: technical benchmarking enables us to track the evolution of any technology and its sister and daughter use cases, while legal-ethical benchmarking gives us a conscious understanding of how effective AI market practices can be developed.


That does not mean that legal-ethical benchmarking on commonsensical principles like privacy, fairness and data quality (most AI standards will, across sectors, naturally begin as data protection principles) should be applied in a rigid, controlling and absolutist way. An improperly drafted standardisation approach could itself harm a market economy that is still working through the scaling and R&D stages of AI.

Fortunately, India already has a full-fledged data protection statute to build on: the Digital Personal Data Protection Act, 2023 (DPDPA).


Here's what we have planned for technology professionals, AI & tech startups & MSMEs of Bharat and the Indo-Pacific:

  • The Indian Society of Artificial Intelligence and Law (ISAIL) is launching aistandard.io - a repository of AI-related legal-ethical and policy standards with a sector-neutral or sector-specific focus. Members of ISAIL, and of its specific committees, can wholeheartedly contribute to AI standardisation by suggesting their inputs on standardising AI use cases, solutions and testing benchmarks (legal/policy/technical/all);

  • The ISAIL Secretariat will define a set of rules of engagement to contribute to AI standardisation for professionals and businesses;

  • You can also participate and become a part of the aistandard.io community as an ISAIL member for active participation via paid subscription at indian.substack.com or via manual request at executive@isail.co.in;

  • The Indian Society of Artificial Intelligence and Law will soon invite technology companies, MSMEs and startups to become its Allied Members;

  • This is why I am glad to state that the Indian Society of Artificial Intelligence and Law, in conjunction with Indic Pacific Legal Research LLP, will come up with relevant standards on AI use cases across certain key sectors in India: banking & finance, health, education, intellectual property management, agriculture and legal technologies.

  • Our aim is to propose industry viability standards, not regulatory standards, in order to study basic parameters for regulation, such as (1) the inherent purpose of AI systems, (2) market integrity (including competition law), (3) risk management and (4) knowledge management.

  • Indic Pacific will publish the Third Version of the AIACT.IN proposal shortly;


To begin with, we have defined certain principles of AI standardisation, which may apply in every case. We have termed these the "ISAIL Principles of AI Standardisation" (aistandard.io).


The ISAIL Principles of AI Standardisation


Principle 1: Sector-Neutral and Sector-Specific Applicability


AI standardization guidelines should be applicable across all sectors and industries, regardless of the size or economic status of the companies involved. However, they should also consider sector-specific requirements and use cases to ensure relevance and effectiveness.


Principle 2: Legal-Ethical and Technical Benchmarking


AI standardization should involve both legal-ethical and technical benchmarking. Legal-ethical benchmarking should focus on principles like privacy, fairness, and data quality, while technical benchmarking should enable tracking the evolution of AI technologies and their use cases.


Principle 3: Flexibility and Adaptability


The standardization approach should be flexible and adaptable to the evolving AI landscape in India, which is still in the scaling and R&D stages. The guidelines should not be rigid or absolutist, but should allow room for innovation and growth.


Principle 4: Credible Use Case Focus


The guidelines should prioritize credible and original AI use cases, and critically evaluate claims made by companies to avoid hype and misleading narratives. This will help ensure that the standardization efforts are grounded in practical realities.


Principle 5: Interoperability and Market Integration


AI standardisation should prioritize interoperability to ensure seamless integration of market practices and foster a free economic environment. Standards should be developed with due care to promote healthy competition and innovation while preventing market fragmentation.


Principle 6: Multistakeholder Participation and Engagement Protocols


The development of AI standards should involve active participation and collaboration from diverse stakeholders, including the science and research community, startups, MSMEs, industry experts, policymakers, and civil society. However, such participation will be subject to well-defined protocols of engagement to ensure transparency, accountability, and fairness. The open-source or proprietary nature of engagement in any initiative will depend on these protocols.


Principle 7: Recording and Quantifying AI Use Cases


To effectively examine the evolution of AI as a class of technology, it is crucial to record and quantify AI use cases for systems, products, and services. This includes documenting the real features and factors associated with each use case. Both legal-ethical and technical benchmarking should be employed to assess and track the development and impact of AI use cases.
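
As a rough illustration of Principle 7, here is a minimal sketch of how a single AI use case might be recorded with both legal-ethical findings and technical benchmark metrics, so that it can be versioned and compared over time. The structure, field names, and example values are assumptions for illustration, not an ISAIL specification.

```python
# Illustrative sketch only: a simple record combining legal-ethical and
# technical benchmarking for one AI use case, as contemplated by Principle 7.
import json
from dataclasses import dataclass, field, asdict


@dataclass
class UseCaseRecord:
    use_case_id: str
    sector: str
    description: str
    # Qualitative legal-ethical findings, keyed by principle (privacy, fairness, ...)
    legal_ethical: dict = field(default_factory=dict)
    # Quantitative technical benchmarks for the current version of the system
    technical: dict = field(default_factory=dict)


# Example entry: identifiers, findings, and metrics are invented for illustration.
record = UseCaseRecord(
    use_case_id="edu-001",
    sector="education",
    description="automated grading assistant for short-answer questions",
    legal_ethical={
        "privacy": "student data minimised and stored within India",
        "fairness": "audited for disparate error rates across language groups",
    },
    technical={"accuracy": 0.87, "mean_latency_ms": 220.0},
)

# Serialising the record makes it easy to version, share, and compare over time.
print(json.dumps(asdict(record), indent=2))
```

Keeping both kinds of findings in one record is what allows the evolution of a use case to be tracked across versions rather than assessed only at a single point in time.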


From VLiGTA App to IndoPacific App


We have transitioned our technology law and policy repository and e-commerce platform, VLiGTA.App, to IndoPacific.App.


We are thrilled to announce a significant evolution in our platform’s journey. 


Say hello to indopacific.app, your essential app for mastering legal skills and insights. This change is driven by our commitment to making legal education more comprehensive and accessible to a broader audience, especially those in the tech industry and beyond.


Why the Change?


🔍 Enhanced Focus and Broader Audience


Our previous platform, vligta.app, was primarily focused on legal professionals. With indopacific.app, we are expanding our horizons to make legal knowledge relevant and accessible to tech professionals and other non-legal fields. Learn how legal skills can empower you, no matter your profession.


🌟 Alignment with Our New Vision and Mission


Our new main tagline, "Your essential app for mastering legal skills & insights," underscores our dedication to being the go-to resource for high-quality, practical legal education. Meanwhile, our supporting tagline, "Empower yourself with legal knowledge, tailored for tech and beyond," highlights our commitment to broader applicability and professional growth.


📈 Improved User Experience and Resources


Enjoy a revamped user interface, enhanced features, and a richer resource library. Dive into diverse content such as case studies, interactive modules, and expert talks that bridge the gap between legal concepts and practical application in various fields.


🌏 Reflecting a Global Perspective


The name indopacific.app signifies our goal to cater to a global audience, particularly in the dynamic and rapidly evolving regions of the Indo-Pacific. We aim to provide universally applicable legal education that transcends geographical and professional boundaries.


What to Expect?


All existing URLs from vligta.app will automatically redirect to the corresponding pages on indopacific.app, ensuring a seamless transition with no interruption in access to our resources.
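
For readers curious about how such a transition usually works under the hood, the sketch below shows a generic, path-preserving permanent redirect. It is only an illustration of the idea; the actual vligta.app to indopacific.app redirects are handled by the hosting platform, and the framework and code shown here are assumptions for illustration.

```python
# Generic illustration of path-preserving HTTP 301 redirects; this is not the
# platform's actual configuration, just a sketch of the idea using Flask.
from flask import Flask, redirect, request

app = Flask(__name__)


@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def forward(path):
    # Rebuild the same path (and query string) on the new domain and return
    # HTTP 301 so browsers and search engines update to indopacific.app.
    target = f"https://indopacific.app/{path}"
    if request.query_string:
        target += "?" + request.query_string.decode()
    return redirect(target, code=301)


if __name__ == "__main__":
    app.run()
```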


Join us on this exciting journey as we continue to empower professionals with essential legal skills and insights tailored for the tech industry and beyond. 🌐


 

References
