
Abhivardhan

New Report: Promoting Economy of Innovation through Explainable AI [VLiGTA-TR-003]

We are more than glad to release another technical report by the VLiGTA team. The report takes a business-oriented, generalist approach to the ethics of AI explainability. We express our gratitude to Ankit Sahni for authoring the foreword to this technical report.

This research is part of the technical report series by the Vidhitsa Law Institute of Global and Technology Affairs, also known as VLiGTA®, the research & innovation division of Indic Pacific Legal Research.

Responsible AI has been a part of the technology regulation discourse for the AI industry, policymakers and the legal industry alike. As ChatGPT and other generative AI tools have become mainstream, the call to implement responsible AI measures and principles in some form has become a necessary one to consider.

The problem lies in the limited and narrow approach of these responsible AI guidelines, driven by fiduciary interests and the urge to react to every industry update. This is exactly where this report comes in. The problems with responsible AI principles and approaches can be encapsulated in these points:

  • AI technologies have fungible use cases

  • Different stakeholders are involved in different kinds of AI-related disputes, and these differences are not taken into consideration

  • Various classes of mainstream AI technologies exist, and not every major Asian country that develops and uses AI technologies deals with all of them

  • The role of algorithms in shaping the economic and social value of digital public goods remains unclear and uneven within the law

This report is thus a generalist yet specificity-oriented work, written to address and explore the necessity of internalising AI explainability measures. We are clear, with a sense of perspective, that AI explainability measures cannot be considered limited to the domains of machine learning and computer science. Hype aside, there are indeed transdisciplinary and legal AI explainability measures that could be implemented.

I am glad my co-authors from the VLiGTA team did justice to this report. Sanad Arora, the first co-author, has contributed extensively on the limitations of responsible AI principles and approaches, and has also offered insights on the convergence of legal and business concerns related to AI explainability. Bhavana J Sekhar, the second co-author, has offered her insights on developing AI explainability measures to practise conflict management across technical and commercial AI use cases, and has contributed extensively on the legal and business concerns pertaining to enabling AI explainability in Chapter 3.


Finally, it has been my honour to contribute to the development of AI explainability measures for innovation management, across both technical and commercial AI use cases. I am glad that I could also offer an extensive analysis of the socio-economic limits of present responsible AI approaches.



Recommendations from VLiGTA-TR-003

Converging Legal and Business Concerns

  • Legal and business concerns can be jointly addressed by XAI: data collected from XAI systems can be used to address regulatory challenges and support innovation, while keeping accountability at the forefront.

  • Additionally, information from XAI systems can assist in developing and improving tailor-made risk management strategies, enabling risk intervention at the earliest stage.

  • Explainable AI tools can rely on prototype models with self-learning approaches to adapt and learn. Model-agnostic explanation is also highly flexible, since it only needs access to the model’s output (a minimal sketch of this property follows this list).

  • Privacy-aware machine learning tools can also be incorporated into the development of explainable AI tools to avoid the risks of data breaches and privacy violations (a second sketch below illustrates one such technique). Compliance frameworks may be developed and used for development purposes, including the general mandates attributed to them.
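
To make the model-agnostic point concrete, here is a minimal sketch in Python of permutation importance, one classic explanation technique that treats the model as a black box and needs only its predictions. The report does not prescribe this specific method; the function and variable names (predict, X, y) are illustrative assumptions.

import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Accuracy drop when each feature is shuffled; higher = more important."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)                   # black-box access only
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])  # break feature j's signal
            drops.append(baseline - np.mean(predict(X_perm) == y))
        importances[j] = np.mean(drops)
    return importances

Because the sketch calls only predict, it works unchanged across model classes, which is exactly what makes model-agnostic explanation flexible.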
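
Similarly, the privacy-aware point can be illustrated with the Laplace mechanism from differential privacy: calibrated noise is added to an aggregate statistic so that no single record can be confidently inferred from the output. This is one possible technique, not the report's mandate; the clipping bounds and epsilon parameter below are illustrative.

import numpy as np

def laplace_mean(values, lo, hi, epsilon, seed=0):
    """Differentially private mean of values clipped to [lo, hi]."""
    rng = np.random.default_rng(seed)
    clipped = np.clip(np.asarray(values, dtype=float), lo, hi)
    sensitivity = (hi - lo) / len(clipped)            # max effect of one record
    noise = rng.laplace(0.0, sensitivity / epsilon)   # smaller epsilon = more noise
    return float(np.mean(clipped) + noise)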


Conflict Management

  • Compliance by design may become a significant means of encouraging the use of regulatory sandboxes and enabling innovation management in as productive a way as possible. Where sandboxes prove ineffective, real-time awareness and consumer education efforts must follow, keeping technology products and services accessible and human-centric by design.

  • Risk management strategies should be incorporated at different stages of the AI life cycle, from the inception of data collection and model training onward.

  • De-risking AI can involve model risk assessment that classifies an AI model by its risk level (high, medium or low) and its contextual usage, which will further assist developers and stakeholders in jointly developing risk mitigation principles proportionate to the level of risk the AI incurs (see the sketch after this list).

  • Deploying AI explainability measures will require a level of decentralisation, with transdisciplinary teams working closely to provide complete oversight. Risk monitoring should be carried out by data scientists, developers and key managerial personnel (KMPs), sharing overlapping information and periodically improving situational analysis of the AI system.
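
The risk-tiering idea above can be sketched as a small piece of Python. The tiering criteria, field names and mitigation duties here are hypothetical placeholders, not the report's taxonomy; any real scheme would be set by the transdisciplinary team doing the oversight.

from dataclasses import dataclass

@dataclass
class ModelContext:
    affects_legal_rights: bool      # e.g. credit, employment, benefits
    processes_personal_data: bool
    human_in_the_loop: bool

def risk_tier(ctx: ModelContext) -> str:
    """Classify a model as high/medium/low risk from its usage context."""
    if ctx.affects_legal_rights and not ctx.human_in_the_loop:
        return "high"
    if ctx.affects_legal_rights or ctx.processes_personal_data:
        return "medium"
    return "low"

# Each tier maps to proportionate mitigation duties (illustrative only).
MITIGATIONS = {
    "high":   ["explainability report", "periodic audit", "human oversight"],
    "medium": ["explainability report", "periodic review"],
    "low":    ["basic documentation"],
}

print(risk_tier(ModelContext(True, True, False)))   # -> "high"

Keying mitigation duties to the tier keeps the compliance effort proportionate to contextual risk, which is the point of classifying models in the first place.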


Innovation Management

  • The element of trust is necessary, and companies must make clear the workflow behind the purpose of data use.

  • Even if the legal risks are not foreseeable, companies can at least make decisions that de-risk the algorithmic exploitation of personal and non-personal data, metadata and other classes of data and information.

  • These involve technical and economic choices first, which is why, unless regulators come up with straightforward regulatory solutions, companies must see how they can minimise the chances of exploitation, enhance the quality of their deliverables and keep their knowledge management practices much safer.
