- Crafting the Future: Gratitude to DNLU Jabalpur and the Pivotal Role of aiact.in in Shaping AI Governance
At Indic Pacific Legal Research LLP, we are thrilled to extend our heartfelt gratitude to Dharmashastra National Law University (DNLU), Jabalpur, for their remarkable initiative in hosting a Legislative Drafting Competition centered on the "Artificial Intelligence (Development and Regulation) Act." It’s a moment of pride and affirmation for us to witness a leading Indian law school engage with the critical intersection of AI and law—a space we’ve been passionately shaping since our inception in 2019. DNLU’s efforts to nurture innovative legal thinking align beautifully with our mission to foster responsible AI development and governance in India, and we wish them resounding success in this endeavour. This moment also shines a light on the journey of aiact.in —our flagship project, the Artificial Intelligence (Development & Regulation) Act, 2023, spearheaded by our founder, Abhivardhan. Launched in November 2023 with no grand expectations, this privately proposed AI bill has grown into a pivotal resource, inspiring conversations like the one at DNLU. What began as a vision to craft an India-centric framework for AI regulation has, in just over a year, garnered appreciation from developers, judges, and technologists alike. Its strength lies in its feedback-driven approach—offering a practical, adaptable blueprint that stakeholders can refine and build upon. Seeing it spark a legislative drafting competition at DNLU is a testament to its relevance and potential to influence India’s AI policy landscape. For us at Indic Pacific, aiact.in is more than a draft—it’s a cornerstone of our commitment to pioneering technology law solutions with an Indo-Pacific lens. Despite early skepticism (including a dismissive encounter with a law firm that overlooked its originality), this initiative has proven its worth by amplifying Indian perspectives in a global discourse often dominated by Western frameworks. It embodies our ethos of salience, persistence, and adaptivity, driving dialogue among startups, MSMEs, and policymakers. Through our Research & Innovation Division, VLiGTA®, we’ve ensured aiact.in remains a dynamic tool—evolving with insights from industry and academia, as evidenced by its recognition in DNLU’s competition. We’re deeply grateful to DNLU Jabalpur for not only embracing this theme but also acknowledging our efforts in shaping AI governance. Your competition is a powerful step toward building a future where AI is harnessed responsibly, and we at Indic Pacific are honoured to be part of this narrative. Here’s to continued collaboration and innovation—may DNLU’s students and faculty inspire the next wave of legal brilliance!
- Version 5 of the Artificial Intelligence (Development & Regulation) Act, 2023 is Launched
Indic Pacific Legal Research, under the stewardship of Abhivardhan, proudly presents Version 5.0 of the Draft Artificial Intelligence (Development & Regulation) Act, 2023 (AIACT.in). This iteration introduces pivotal amendments, with Section 23 leading as a freshly revised cornerstone, alongside updates to Section 7, Section 9, Section 13, Section 20A, and the newly introduced Section 24-A. These changes underscore Indic Pacific's commitment to ethical, transparent, and inclusive AI regulation in India.
Section 23: Content Provenance and Identification (Key Highlight)
Indic Pacific has reimagined Section 23 to set a gold standard for AI-generated content. The amendment mandates watermarking with detailed metadata—covering scraping methods, data origins, and licensing—while enforcing ethical data practices limited to consented or public sources. Developers of high-impact systems must secure insurance up to ₹50 crores, ensuring accountability and curbing misuse. This positions Indic Pacific at the forefront of content integrity.
Section 7: Strengthened Risk Classification
Indic Pacific refines AI risk tiers—Narrow, Medium, High, and Unintended—banning the latter and intensifying scrutiny on High-Risk systems. This amendment safeguards against unpredictable technologies, reinforcing public trust and security.
Section 9: Oversight in Strategic Sectors
High-risk AI in designated strategic sectors now falls under tailored regulations, with this Act prevailing over conflicting rules. Indic Pacific ensures robust governance where it matters most.
Section 13: Enhanced National AI Ethics Code
The updated ethics code prioritizes transparency, fairness, and human oversight, offering a clear roadmap for responsible AI. Indic Pacific champions ethical innovation with this refresh.
Section 20A: Transparency in Public AI Initiatives
Government and partnership AI projects must now disclose objectives, funding, and algorithms, backed by audits and public explanations. Indic Pacific drives accountability in the public sphere.
Section 24-A: Right to AI Literacy Introduced
A landmark addition, this section grants every individual access to AI literacy—covering concepts, impacts, and recourse options. Indic Pacific empowers citizens for an AI-driven future.
These amendments, with Section 23 as the flagship, exemplify Indic Pacific's vision for a balanced, responsible AI ecosystem. Please give your feedback on this version of the bill at vligta@indicpacific.com.
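For readers who want a concrete sense of what Section 23-style content provenance could involve in practice, here is a minimal, purely illustrative sketch. The draft Act does not prescribe any schema; every field name below is a hypothetical example of the kind of metadata (scraping method, data origins, licensing) that watermarking of AI-generated content might carry.

```python
# Purely illustrative: AIACT.in (the draft Act) does not prescribe a metadata
# schema. The field names below are hypothetical examples of the kind of
# provenance record that Section 23-style watermarking of AI-generated
# content might carry (scraping method, data origins, licensing).
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_record(content: bytes, scraping_method: str,
                            data_origins: list, licence: str) -> dict:
    """Assemble a minimal machine-readable provenance record for a piece of
    AI-generated content (hypothetical fields, not the Act's own schema)."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,
        "scraping_method": scraping_method,   # e.g. licensed API, public web crawl
        "data_origins": data_origins,         # consented or public sources only
        "licence": licence,                   # licensing terms attached to the output
    }

if __name__ == "__main__":
    record = build_provenance_record(
        b"example generated text",
        scraping_method="licensed dataset",
        data_origins=["publicly available corpus", "consented user submissions"],
        licence="CC BY 4.0",
    )
    print(json.dumps(record, indent=2))
```

In practice such a record would be bound to the generated content itself, for instance through watermarking, rather than published alongside it.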
- Decoding the AI Competency Triad for Public Officials: A Deep Dive into India’s Strategic Framework
The Ministry of Electronics and Information Technology (MeitY) recently launched its AI Competency Framework, aiming to equip public officials with the skills needed to responsibly integrate artificial intelligence into governance processes. Our latest report, "Decoding the AI Competency Triad for Public Officials" (IPLR-IG-014), provides an in-depth analysis of this framework and its implications for India's public sector. This report is authored by Abhivardhan, Founder & Managing Partner, and interns at the Indian Society of Artificial Intelligence and Law, Yashita Parashar, Sneha Binu, and Gargi Mundotia.
📖 Access the full report here: https://indopacific.app/product/iplr-ig-014/
Why This Framework Matters
India is at a pivotal moment in its AI journey, with initiatives like the IndiaAI Mission positioning the country as a global leader in ethical and inclusive AI adoption. The competency framework identifies three core skill areas—behavioral, functional, and domain-specific—that are essential for public officials navigating the complexities of AI governance.
Key Highlights from the Report
Behavioral Competencies: Focuses on systems thinking, ethical governance, and innovative leadership to address complex societal challenges through AI.
Functional Competencies: Covers practical skills like risk assessment, procurement oversight, and data governance necessary for effective implementation of AI projects.
Domain-Specific Competencies: Tailored to high-impact sectors like healthcare, education, agriculture, urban mobility, and environmental management.
Strategic Recommendations
The report also provides actionable insights across three critical legal-policy dimensions:
Data Policy Alignment: Ensuring privacy-by-design principles are embedded in every stage of AI deployment.
Intellectual Property Management: Addressing gaps in knowledge sharing while safeguarding innovation rights.
Accountability & Transparency: Establishing robust oversight mechanisms to ensure ethical use of AI technologies.
Who Should Read This?
This report is designed for policymakers, entrepreneurs, public officials, and citizens who want to understand how India is building capacity for responsible AI integration while addressing global challenges like bias mitigation and data privacy.
📖 Access the full report here: https://indopacific.app/product/iplr-ig-014/
- CIArb Guideline on the Use of AI in Arbitration (2025), Explained
This insight is co-authored by Vishwam Jindal, Chief Executive Officer, WebNyay. The Chartered Institute of Arbitrators (CIArb) guideline on the use of AI in arbitration, published in 2025, provides a detailed framework for integrating AI into arbitration proceedings. This analysis covers every chapter, highlighting what each includes and identifying potential gaps. Below, we break down the key sections for clarity, followed by a detailed survey note for a deeper understanding.
Chapter-by-Chapter Analysis
Part I: Benefits and Risks: Details AI's advantages (e.g., legal research, data analysis) and risks (e.g., confidentiality, bias), providing a broad overview.
Part II: General Recommendations: Advises on due diligence, risk-benefit analysis, legal compliance, and maintaining accountability for AI use.
Part III: Parties' Use of AI: Covers arbitrators' powers to regulate AI, party autonomy in agreeing on its use, and disclosure requirements for transparency.
Part IV: Use of AI by Arbitrators: Allows discretionary AI use for efficiency, prohibits decision delegation, and emphasizes transparency through party consultation.
Appendices: Includes templates for AI use agreements and procedural orders, aiding practical implementation.
Definitions: Provides clear definitions for terms like AI, hallucination, and tribunal, based on industry standards. On definitions, however, it would have been better if CIArb had adopted AI-related definitions from third-party technical bodies such as IEEE, ISO or Creative Commons, rather than from IBM.
Part I: Benefits and Risks
Part I provides a balanced view of AI's potential benefits and risks in arbitration. The benefits section (1.1-1.10) highlights efficiency gains through legal research enhancement, data analysis capabilities, text generation assistance, evidence collection streamlining, and translation/transcription improvements. Notably, section 1.10 acknowledges AI's potential to remedy "inequality of arms" by providing affordable resources to under-resourced parties. The risks section (2.1-2.9) addresses significant concerns including confidentiality breaches when using third-party AI tools, data integrity and cybersecurity vulnerabilities, impartiality issues arising from algorithmic bias, due process risks, the "black box" problem of AI opacity, enforceability risks for arbitral awards, and environmental impacts of energy-intensive AI systems.
Benefits
AI offers transformative potential in arbitration by enhancing efficiency and quality across various stages of the process:
Legal Research: AI-powered tools outperform traditional search engines with their adaptability and predictive capabilities, enabling faster and more precise research.
Data Analysis: AI tools can process large datasets to identify patterns, correlations, and inconsistencies, aiding in case preparation.
Text Generation: Tools can draft, summarize, and refine documents while ensuring grammatical accuracy and coherence.
Translation and Transcription: AI facilitates multilingual arbitration by translating documents and transcribing hearings at lower costs.
Case Analysis: Predictive analytics provide insights into case outcomes and procedural strategies.
Evidence Collection: AI streamlines evidence gathering and verification, including detecting deep fakes or fabricated evidence.
Risks
Despite its advantages, AI introduces several risks:
Confidentiality: Inputting sensitive data into third-party AI tools raises concerns about data security and misuse.
Bias: Algorithmic bias can compromise impartiality if datasets or algorithms are flawed.
Due Process: Over-reliance on AI tools may undermine parties' ability to present their cases fully.
"Black Box" Problem: The opaque nature of some AI algorithms can hinder transparency and accountability.
Enforceability: The use of banned or restricted AI tools in certain jurisdictions could jeopardise the validity of arbitral awards.
Limitations in Part I
Part I exhibits several significant limitations that undermine its comprehensiveness:
Incomplete treatment of risks: While identifying key risk categories, the guidelines lack depth in addressing bias detection and mitigation strategies, transparency mechanisms, and AI explainability challenges.
Gaps in benefits coverage: The incomplete presentation of sections 1.5-1.9 suggests missing analysis of potential benefits such as evidence gathering and authentication applications.
Absence of risk assessment framework: No structured methodology is provided for quantitatively evaluating the likelihood and severity of identified risks, leaving arbitrators without clear guidance on risk prioritisation.
Limited forward-looking analysis: The section focuses primarily on current AI capabilities without adequately addressing how rapidly evolving AI technologies might create new benefits or risks in the near future.
Part II: General Recommendations
The CIArb guidelines emphasise a cautious yet proactive approach to AI use:
Due Diligence: Arbitrators and parties should thoroughly understand any AI tool's functionality, risks, and legal implications before using it.
Balancing Benefits and Risks: Users must weigh efficiency gains against potential threats to due process, confidentiality, or fairness.
Accountability: The use of AI should not diminish the responsibility or accountability of parties or arbitrators.
In summary, Part II establishes broad principles for AI adoption in arbitration. It encourages participants to conduct reasonable inquiries about AI tools' technology and function (3.1), weigh benefits against risks (3.2), investigate applicable AI regulations (3.3), and maintain responsibility despite AI use (3.4). The section addresses critical issues like AI "hallucinations" (factually incorrect outputs) and prohibits arbitrators from delegating decision-making responsibilities to AI systems. Part II provides general advice on due diligence, risk assessment, legal compliance, and accountability for AI use. However, it has notable gaps:
Lack of Specific Implementation Guidance: The recommendations, such as conducting inquiries into AI tools (3.1), are broad and lack practical tools like checklists or frameworks. For example, it could include a step-by-step guide for evaluating AI tool security or a risk-benefit analysis template, aiding users in application.
Insufficient technical implementation guidance: The recommendations remain abstract without providing specific technical protocols for different types of AI tools or use cases.
No Examples or Hypothetical / Real Case Studies: Without real-world scenarios or even comparable hypothetical scenarios, such as how a party assessed an AI tool for confidentiality risks, practitioners may struggle to apply the recommendations. Hypothetical examples could bridge this gap, enhancing understanding.
Absence of AI literacy standards: No baseline competency requirements are established for arbitration participants using AI tools, creating potential disparities in understanding and application.
Missing protocols for AI transparency: The guidelines don't specify concrete mechanisms to make AI processes comprehensible to all parties, particularly important given the "black box" problem acknowledged elsewhere.
No Mechanism for Periodic Review: Similar to Part I, there is no provision for regularly updating the recommendations, such as a biennial review process, which is critical given AI's rapid evolution, like the advent of generative AI models.
Lack of Input from Technology Experts: The guideline does not indicate consultation with AI specialists or technologists, such as input from organizations like the IEEE (IEEE AI Ethics), which could ensure the recommendations reflect current industry practices and technological realities.
Part III: Parties' Use of AI
Arbitrators' Powers
Arbitrators have broad authority to regulate parties' use of AI: they may issue procedural orders requiring disclosure of AI use if it impacts evidence or proceedings, and they can appoint experts to assess specific AI tools or their implications for a case.
Party Autonomy
Parties retain significant autonomy to agree on the permissible scope of AI use in arbitration. Arbitrators are encouraged to facilitate discussions about potential risks and benefits during case management conferences.
Disclosure Requirements
Parties may be required to disclose their use of AI tools to preserve procedural integrity. Non-compliance with disclosure obligations could lead to adverse inferences or cost penalties.
In summary, Part III establishes a framework for regulating parties' AI use. Section 4 outlines arbitrators' powers to direct and regulate AI use, including appointing AI experts (4.2), preserving procedural integrity (4.3), requiring disclosure (4.4), and enforcing compliance (4.7). Section 5 respects party autonomy in AI decisions while encouraging proactive discussion of AI parameters. Sections 6 and 7 address rulings on AI admissibility and disclosure requirements respectively. Part III contains several problematic gaps:
Ambiguity in Private vs. Procedural AI Use: Section 4.5 states arbitrators cannot regulate private use unless it interferes with proceedings, but the boundary is vague. For example, using AI for internal strategy could blur lines, and clearer definitions are needed.
Inadequate dispute resolution mechanisms: Despite acknowledging potential disagreements over AI use, the guidelines lack specific procedures for efficiently resolving such disputes.
Disclosure framework tensions: The optional nature of disclosure creates uncertainty about when transparency should prevail over party discretion, potentially undermining procedural fairness.
Absence of cost allocation guidance: The guidelines don't address how costs related to AI tools or AI-related disputes should be allocated between parties.
Limited cross-border regulatory guidance: Insufficient attention is paid to navigating conflicts between different jurisdictions' AI regulations, a critical issue in international arbitration.
Potential Issues with Over-Reliance on Party Consent: The emphasis on party agreement (Section 5) might limit arbitrators' ability to act decisively if parties disagree, especially if one party lacks technical expertise, potentially undermining procedural integrity.
Need for Detailed Criteria for Selecting AI Experts: While arbitrators can appoint AI experts, there are no specific criteria, such as qualifications in AI ethics or experience in arbitration, which could ensure expert suitability and consistency.
Part IV: Use of AI by Arbitrators
Discretionary Use
Arbitrators may leverage AI tools to enhance efficiency but must ensure that independent judgment is maintained and that tasks such as legal analysis or decision-making are not delegated entirely to AI.
Transparency
Arbitrators are encouraged to consult parties before using any AI tool. If parties object, arbitrators should refrain from using that tool unless all concerns are addressed.
Responsibility
Regardless of AI involvement, arbitrators remain fully accountable for all decisions and awards issued.
In summary, Part IV addresses arbitrators' AI usage, establishing that arbitrators may employ AI to enhance efficiency (8.1) but must not relinquish decision-making authority (8.2), must verify AI outputs independently (8.3), and must assume full responsibility for awards regardless of AI assistance (8.4). Section 9 emphasises transparency through consultation with parties (9.1) and other tribunal members (9.2). Part IV exhibits several notable limitations:
Inadequate technical implementation guidance: The section provides general principles without specific technical protocols for different AI applications in arbitrator decision-making.
Missing AI literacy standards for arbitrators: No baseline competency requirements are established to ensure arbitrators sufficiently understand the AI tools they employ.
Insufficient documentation requirements: The guidelines don't specify how arbitrators should document AI influence on their decision-making process in awards or orders.
Absence of practical examples: Without concrete illustrations of appropriate versus inappropriate AI use by arbitrators, the guidance remains abstract and difficult to apply.
Underdeveloped bias mitigation framework: While acknowledging potential confirmation bias, the guidelines lack specific strategies for detecting and counteracting such biases.
Appendix A: Agreement on the Use of AI in Arbitration
Appendix A provides a template agreement for parties to formalize AI use parameters, including sections on permitted AI tools, authorized uses, disclosure obligations, confidentiality preservation, and tribunal AI use.
Critical Deficiencies
Appendix A falls short in several areas:
Excessive generality: The template may be too generic for complex or specialised AI applications, potentially failing to address nuanced requirements of different arbitration contexts.
Limited customisation guidance: No framework is provided for adapting the template to different types of arbitration or technological capabilities of the parties.
Poor institutional integration: The template doesn't adequately address how it interfaces with various institutional arbitration rules that may have their own technology provisions.
Static nature: No provisions exist for updating the agreement as AI capabilities evolve during potentially lengthy proceedings.
Insufficient technical validation mechanisms: The template lacks provisions for verifying technical compliance with agreed AI parameters.
Appendix B: Procedural Order on the Use of AI in Arbitration
Appendix B provides both short-form and long-form templates for arbitrators to issue procedural orders on AI use, introducing the concept of "High Risk AI Use" requiring mandatory disclosure, establishing procedural steps for transparency, and enabling parties to comment on proposed AI applications.
Critical Deficiencies
Appendix B contains several notable gaps:
Technology adaptation limitations: The templates lack mechanisms for addressing emerging AI technologies that may develop during proceedings.
Enforcement uncertainty: Limited guidance is provided on monitoring and enforcing compliance with AI-related orders.
Insufficient technical validation: The templates don't establish concrete mechanisms for verifying adherence to AI usage restrictions.
Absence of update protocols: No provisions exist for modifying orders as AI capabilities evolve during proceedings.
Limited remedial options: Beyond adverse inferences and costs, few specific remedies are provided for addressing non-compliance.
Conclusion: Actionable Recommendations for Enhancement
The CIArb AI Guideline represents a significant first step toward establishing a framework for AI integration in arbitration, demonstrating awareness of both benefits and risks while respecting party autonomy. However, to transform this preliminary framework into a robust and practical tool, several enhancements are necessary:
Technical Implementation Framework: Develop supplementary technical guidelines with specific protocols for AI verification, validation, and explainability across different arbitration contexts and AI applications.
AI Literacy Standards: Establish minimum competency requirements and educational resources for arbitrators and practitioners to ensure informed decision-making about AI tools.
Adaptability Mechanisms: Implement a formal revision process with specific timelines for guideline updates to address rapidly evolving AI capabilities.
Transparency Protocols: Create more detailed transparency requirements with clearer thresholds for mandatory disclosure to balance flexibility with procedural fairness.
Risk Assessment Methodology: Develop a quantitative framework for systematically evaluating AI risks in different arbitration contexts.
Practical Examples Library: Supplement each section with concrete case studies illustrating appropriate and inappropriate AI applications in arbitration.
Institutional Integration Guidance: Provide specific recommendations for aligning these guidelines with existing institutional arbitration rules.
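Given the guideline's emphasis on disclosure (Part III and Appendix B) and its lack of practical checklists, a minimal sketch of the kind of structured AI-use disclosure record a party might keep is shown below. This is purely illustrative: the CIArb guideline prescribes no such format, and every field name here is a hypothetical assumption rather than anything drawn from the templates.

```python
# Hypothetical illustration only: the CIArb guideline prescribes no disclosure
# format. This sketch shows the kind of structured record a party might keep
# to respond to an AI-use disclosure direction under a procedural order.
from dataclasses import dataclass, asdict
import json

@dataclass
class AIUseDisclosure:
    tool_name: str                                 # e.g. a commercial LLM or translation tool
    task: str                                      # what the tool was used for
    inputs_contained_confidential_material: bool
    output_reviewed_by_counsel: bool
    high_risk_use: bool                            # echoes Appendix B's "High Risk AI Use" notion
    notes: str = ""

disclosure = AIUseDisclosure(
    tool_name="generic-translation-model",
    task="Machine translation of an exhibit, verified by a human translator",
    inputs_contained_confidential_material=True,
    output_reviewed_by_counsel=True,
    high_risk_use=False,
    notes="Tool run on-premises; no data retained by the provider.",
)

print(json.dumps(asdict(disclosure), indent=2))
```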
- Does Art Law in India Require Regulation? Maybe.
India, a country with an unparalleled artistic heritage, faces unique legal challenges in regulating its growing art market. While existing laws protect antiquities and govern intellectual property, the lack of a dedicated regulatory body for art has led to gaps in dispute resolution, authentication, taxation, and trade compliance. Moreover, the rise of digital art and NFTs has introduced complexities and intricacies that Indian laws are yet to fully address. Without proper oversight, artists, collectors, and investors navigate a market that is often ambiguous and vulnerable to exploitation. This article highlights these pressing issues and the crucial role of arbitration, mediation, and regulatory reforms in shaping a more structured and secure art ecosystem.
India's Art Industry at Loggerheads?
The Indian art industry remains largely unregulated, leading to issues such as forgery, misrepresentation, and an unclear dispute resolution mechanism. Without a formal authentication authority, buyers and collectors often struggle to verify the provenance of artworks, increasing the risk of fraud and duplicate artworks. This lack of oversight has allowed counterfeit artworks to flood the market, eroding trust and making transactions riskier for both buyers and sellers. Adding to these concerns is the absence of regulated pricing and taxation policies, making it difficult for artists and buyers to navigate legal obligations. Unlike other industries that benefit from structured oversight, art transactions in India remain fragmented, leading to inconsistent taxation and compliance challenges. Many deals occur in informal markets, where tax evasion and opaque pricing structures prevail. Without a dedicated Art Regulatory Authority, buyers rely on informal networks for provenance verification, and disputes often escalate into prolonged litigation. The lack of streamlined governance and regulations in the art market highlights the requirement for a structured regulatory framework that can ensure transparency, fairness, and accountability in all aspects of art trade and ownership.
In India, however, art arbitration involves an interplay between intellectual property rights and arbitration law. As per the Arbitration and Conciliation Act, awards are unenforceable if they arise out of an "in-arbitrable" dispute. Art disputes involve issues of ownership, authenticity, copyright infringement, succession, and testamentary matters, and are therefore often contested as being in-arbitrable. Art disputes also frequently involve complex issues, including authorship claims, forgery allegations, and breach of contractual terms. Given the time-consuming nature of traditional litigation, arbitration and mediation have become preferred modes of dispute resolution in the global art market. These mechanisms provide a faster, more cost-effective, and confidential approach to resolving conflicts without jeopardizing artistic or commercial relationships. Mediation allows parties to reach a mutually acceptable settlement while preserving professional relationships. This is particularly useful in cases involving artist-gallery disputes, copyright infringements, and ownership claims. A mediated resolution ensures that creative partnerships remain intact, preventing long legal battles from hindering artistic growth. Arbitration, on the other hand, ensures confidential, specialised, and enforceable decisions, making it ideal for high-value transactions.
Art-related disputes often involve international parties, and arbitration provides a neutral forum for resolution. Institutions such as the Delhi International Arbitration Centre (DIAC) and the Mumbai Centre for International Arbitration (MCIA) have begun handling art-related disputes, yet India still lacks dedicated arbitral rules for art transactions. By integrating alternative dispute resolution mechanisms into the art industry, India can ensure faster dispute resolution and stronger legal safeguards for artists and collectors.
With the rise of blockchain technology, digital art and NFTs (Non-Fungible Tokens) have opened new avenues for artists to monetise their work. However, Indian law remains silent on key aspects, leading to challenges in ownership rights, resale royalties, and tax implications. One of the biggest concerns is ownership rights: who holds the copyright for an NFT, the artist or the buyer? Traditional art markets recognize artists' rights to their works, but in the digital space the legal standing of NFT ownership is still unregulated. Moreover, there is ambiguity surrounding resale royalties, with artists often receiving no compensation when their NFTs are resold in the secondary market. In the absence of clear legal provisions, artists are often at the mercy of marketplace policies. Tax implications also remain unsettled. Are NFTs classified as goods, securities, or digital assets under Indian law? The lack of proper classification results in taxation challenges, leaving buyers and sellers in a legal gray area. Without a defined legal framework, NFT transactions could potentially fall under multiple tax regulations, leading to confusion and unintended liabilities. A lack of regulation has led to instances of digital art theft, plagiarism, and unauthorized commercial use, leaving artists vulnerable. The rise of AI-generated art and digital manipulation further complicates the legal landscape.
The international art trade is heavily regulated, and India has multiple laws governing the import and export of artworks. However, enforcement gaps have led to a thriving underground market where valuable artifacts bypass legal scrutiny. The Foreign Exchange Management Act (FEMA), 1999 governs cross-border transactions. Restrictions on foreign direct investment (FDI) in the art sector limit global collaborations, while compliance with Reserve Bank of India (RBI) regulations is mandatory. The Goods and Services Tax (GST) applies to artworks. Original paintings and sculptures attract 12% GST, while prints and reproductions are taxed at 18% GST (Ministry of Finance, Government of India). High taxes encourage informal trade and underreporting, impacting transparency. The Consumer Protection Act, 2019 protects buyers from misrepresentation and fraud, particularly in online sales (Department of Consumer Affairs, India). However, the lack of a formal certification authority makes enforcement difficult. The Customs Tariff Act, 1975 governs import duties and requires special permits for antique exports (Central Board of Indirect Taxes and Customs). Stronger inter-agency collaboration is needed to curb illegal art trafficking and reclaim stolen heritage.
Conclusion
Art law in India is at a crossroads, requiring urgent regulatory intervention to balance cultural preservation with modern commercial needs.
By establishing a dedicated regulatory body, modernizing legal frameworks, and integrating alternative dispute resolution mechanisms, India can create a more structured and globally competitive art market.
- Excited to share: The Indo-Pacific Principles of Legal & Policy Writing - Our Blueprint for Tech Law Excellence! ✨📊🤖
At Indic Pacific Legal Research, we're thrilled to present a set of guiding writing standards crafted to elevate legal and policy communication in the tech space. Get the writing guidelines at indicpacific.com/guidelines . In today's complex AI governance landscape, clear communication isn't just nice-to-have - it's essential. 🔍 Why these principles matter now more than ever: 1️⃣ Precision Over Prolixity ⚡ - As India develops its AI regulatory framework (like MeitY's recent report on AI Governance Guidelines), our work requires communication that cuts through complexity. Every word must earn its place! 2️⃣ Nuance and Novelty Matter 💡 - Our projects demonstrate our commitment to original thinking over redundant reviews. 3️⃣ Be Unassailable 🛡️ - Our consultancy work demands arguments sharp enough to cut through noise yet grounded in reality - essential when advising on AI governance frameworks. 4️⃣ Clarity Is Authority 📣 - We've learned that complex tech law ideas demand simple expressions. If readers struggle, we haven't mastered our craft! 5️⃣ Visuals Amplify Words 📊 - Our "Graphics-powered Insights" service exemplifies how diagrams and visuals can enhance understanding of complex AI governance issues. 6️⃣ Always Anchor in Relevance 🎯 - Our approach to "whole-of-government" AI regulation demonstrates how every idea must drive home a purpose. 7️⃣ Respect the Reader's Time ⏱️ - We prioritise purposeful precision that both informs and engages. These principles guide our advisory work with tech companies and government stakeholders as we navigate India's evolving AI ecosystem. They're not just writing rules - they're the foundation of responsible tech governance! 🌐 As our founder Abhivardhan says: "Complex ideas demand simple expressions." This philosophy powers our work in technology law, AI governance, and policy development across India and beyond. What principles guide YOUR communication in the tech policy space? Share below! 👇
- New Publication: Artificial Intelligence and Policy in India, Volume 6
Proud to announce our latest publication: "Artificial Intelligence and Policy in India, Volume 6," edited by Abhivardhan! 🎉📘 This research collection represents our continued commitment to exploring the frontier of AI governance and implementation in India. 🇮🇳🤖 Read this collection at https://indopacific.app/product/artificial-intelligence-and-policy-in-india-volume-6-aipi-v6/ In collaboration with the Indian Society of Artificial Intelligence and Law (ISAIL), we've brought together four exceptional papers from talented ISAIL interns: 🔹 Rasleen Kaur Dua tackles ethical and regulatory challenges in AI-driven supply chains 🔹 Parvathy Arun explores how algorithms are revolutionizing financial trading 🔹 Oshi Yadav investigates blockchain's transformative role in our digital economy 🔹 Eva Mathur examines how legal education must evolve in the age of technology This volume is essential reading for anyone interested in understanding how AI is reshaping India's policy landscape across multiple sectors. 📊⚖️💡 Available now! Tag someone who needs this resource in their professional library.
- New Report: Deciphering Artificial Intelligence Hype and its Legal-Economic Risks, VLiGTA-TR-001
We are eager to release the Vidhitsa Law Institute's first technical report, on artificial intelligence hype and its legal-economic risks. Bhavana J Sekhar, Principal Researcher, and Poulomi Chatterjee, Contributing Researcher, have co-authored this report with me. In this work, we have addressed in detail the issue of hype cycles caused by artificial intelligence technologies. This report is an initial research contribution developed by the team of the Vidhitsa Law Institute of Global and Technology Affairs (VLiGTA) as a part of the efforts in the Artificial Intelligence Resilience department. We have continued the work we started at the Indian Society of Artificial Intelligence and Law (ISAIL) in 2021 on formalising ethics research on the trend of Artificial Intelligence hype. In my discussions and consultations with Dr Jeffrey Funk, a former faculty member at the National University of Singapore, Bogdan Grigorescu, a tech industry expert and an ISAIL alumnus, and Dr Richard Self from the University of Derby, I realised that it is necessary to encapsulate the scope and extent of Artificial Intelligence hype beyond the competition policy and data privacy issues which many developed countries in the D9 group have already faced. Many technology companies inflate their valuations and use Artificial Intelligence to hype the value of their products and services. This can be done by influencing stocks, distorting perceptions, misdirecting demand, creating credibility concerns, and through other methods as well. The key to the exploitative nature of AI hype as we know it lies in the interconnectedness of the information and digital economy, and in how even minuscule economic and ethical innovations in AI as a technology can be abused. Bhavana's market analysis is succinct and focuses on the points of convergence, and Poulomi's evaluation of the ethics of Artificial Intelligence is appreciated. I express my special regards to Sanad Arora from the Vidhitsa Law Institute and Ayush Kumar Rathore from Indic Pacific's Technology Team for their moral support.
Some of the key aspects discussed in the report concern the perpetuation of hype cycles and their formalisation in the legal rubric for regulators. We have also adopted a soft-law perspective to address larger economic and technical issues and offered recommendations. Based on our research, we have formulated seven working conditions to determine artificial intelligence hype, which are based on a set of stages:
Stage 1: Influence or Generation Determination
An Artificial Intelligence hype cycle is perpetuated to influence or generate market perception in a real-time scenario such that a class of Artificial Intelligence technology as a product / service is used in a participatory or preparatory sense to influence or generate the hype cycle.
Stage 2: Influencing or Generating Market Perceptions & Conditions
The hype cycle may be continuous or erratic, but what matters is the real-time impact on market perceptions affecting the market of the product / services involving Artificial Intelligence technologies, as estimated from a standardised / regulatory / judicial / statutory point of view. The hype cycle may directly or indirectly perpetuate the course of specific anti-competitive practices. Beyond the real-time impact on market perceptions, the consecutive effects of the real-time impact may distort a limited set of related markets, provided that the specific anti-competitive practices are furthered in a distinct pattern.
Stage 3: Uninformed or Disinformed Markets
The market is kept uninformed / disinformed about the features of the product / service subject to the hype cycle. It may be stated that misinforming the market may be construed as keeping the market just uninformed, except not in mutually exclusive cases.
Stage 4: Misdirected Perceptions in the Information & Digital Economy
The hype cycle may be used to distract the information economy by converting the state of being uninformed or disinformed into misdirected perception. This means that the hype cycle about a product or service may not clarify certain specifics and may cause the public or market players to shift their focus towards ancillary considerations, comfortably ignoring the fact that they have been uninformed or disinformed.
Stage 5: Estimation of the Hype Cycle through Risk Determination
In addition, even if preliminary clarifications or assessments are provided to the market, the lack of due diligence in determining the inexplicable features of the Artificial Intelligence technology, in any form or means, as a part of the product or service involves the assessment of the hype cycle with a risk-centric approach. Further interpretation and explanations have been provided in the report.
Recommendations in this Report
Companies must make clear to regulatory bodies the investment in, and ethical design of, products and services which involve narrow AI and high-intensive AI technologies. Maintaining efficient knowledge management systems catering to IP issues is important. It is essential that the economic and ethical repercussions of the by-products of knowledge management are addressed carefully, because many Artificial Intelligence technologies would still remain inexplicable for reasons including ethical ambiguity. If Artificial Intelligence technologies are included in any managerial-level groups, departments or divisions, including the board of directors, for consultative, reliance or any other tangible purpose, then regardless of their attribution to the knowledge management systems maintained by the company itself, including concerns on intellectual property, a risk-oriented practice of maintaining legitimate and viable transparency on issues around data protection & privacy and algorithmic activities & operations must be adopted. Regulators can opt for self-regulatory directives or solutions. In case regulatory sandboxes need to be used, there must be separate guidelines (since they are not products or services) for such kinds of technologies by virtue of their use case in the realm of corporate governance. The transboundary flow of data, based on some commonalities of ethical and quality assessment, can be agreed amongst various countries subject to their data localisation and quality policies. When it comes to Artificial Intelligence technologies, to reduce or detect the impact and aftermath of Artificial Intelligence hype cycles, governments must negotiate and agree on an ethical free flow of data and on mapping certain algorithmic activities & operations which affect public welfare on a case-to-case basis. We propose that the Working Conditions to Determine Artificial Intelligence Hype can be regarded, in a consultative sense, as a framework for various stakeholders to intermix competition policy and technology governance concerns. We are open to consultation, feedback and alternate opinions.
We also propose that the Model Algorithmic Ethics Standards (MAES) be put into use, so that some estimations can be made at a preliminary level as regulatory sandboxes are subject to procurement. The Report is available here. Price: 200 INR
- The UK Government Brief on AI and Copyright Law (2024), Explained
The author of this insight was a Research Intern at the Indian Society of Artificial Intelligence and Law. Made via Luma AI.
The UK economy is driven by many creative industries, including TV and film, advertising, performing arts, music publishing and video games, contributing nearly £124.8 billion in GVA to the economy annually. The rapid development of AI over recent years has sparked a debate globally and within the UK about the various challenges and opportunities it brings. It has led to massive concerns within the creative and media industries about their work being used to train AI without their permission and about media organizations not being able to secure remuneration through licensing agreements. There has also been a lack of transparency from AI developers about the content that is being used to train their models, while these firms raise their own concerns about the lack of clarity over how they can legally access the data to train the models. These concerns are hindering AI adoption, stunting innovation, and holding the UK back from fully utilizing the potential AI holds. The UK government consultation document highlights the need for working in partnership with both the AI sector and the media sector, ensuring greater transparency from AI developers, to build trust between developers and the creative industry.
Focus Areas of the Consultation
The key pillars of the UK government's approach to copyright and AI policy include transparency, technical standards, contracts and licensing, labelling, computer-generated works, digital replicas and emerging issues. The government aims to tackle the challenges with AI in terms of copyright by ensuring that AI developers are transparent about the use of training data for their AI models. The government seeks views on the level of transparency required to ensure that trust is built between AI companies and organisations in the creative industry. Establishing technical standards will help improve and standardise the tools, making it easier for creators and developers to reserve and exercise their rights. Moreover, licensing frameworks need to be strengthened to ensure that creators receive fair remuneration while AI developers also get access to necessary training material. Labelling measures help distinguish AI-generated content from human-created work, which will foster clarity for consumers. Additionally, the protection of computer-generated works needs to align with modern AI capabilities so that fairness is ensured. Finally, addressing digital replicas, such as deepfakes, is essential to protect individuals' identities from misuse.
Figure 1: Key pillars of Copyright and AI policy
Overcoming Challenges in AI Training and Copyright Protection
The government's consultation document looks at the problem of using copyrighted works to train AI models. AI developers use large amounts of data, including copyrighted works, to train their models, but many creators don't get paid for the use of their work. The consultation highlights the issue of transparency, as creators often don't know if their work is in AI training datasets. The government acknowledges the conflict between copyright law and AI development, especially when AI outputs reproduce substantial parts of copyrighted works without permission, which could be copyright infringement. The Getty Images v Stability AI case is being litigated, but it may take years to resolve.
The government is looking at legislation to clarify the rules around AI training and outputs, to get the balance right between creators and AI developers.
Figure 2: A Venn Diagram discussing intersectional aspects around AI Training & Data Mining and Copyright Ownership & Creator Rights
Exceptions with rights reservation
Key features and scope
The data mining exception and rights reservation package under consideration would have features pertaining to increased transparency by AI firms in the use of training data, ensuring right holders receive fair payment when their work is used by AI firms, and addressing the need for licensing. The proposed solutions aim to regulate data mining activities, ensuring lawful access to data and building trust and partnership between AI firms and media and creative organisations.
Figure 3: Proposed exceptions to Data Mining and its Scope.
Addressing Challenges in Developing and Implementing Technical Standards
There is a growing need for standardisation around copyright and AI, so that publishers of content on the Internet can reserve their rights while AI developers have access to training data that does not infringe on the rights of publishers. Regulation is needed to support the adoption of such standards, which will ensure that protocols are recognised and complied with. There are multiple generative AI web crawlers that flag to the developer which data is unavailable for training. Many firms and dataset owners also keep themselves open to being notified more directly by organisations that do not want their work to be used to train an AI model. However, even the most widely adopted standard, robots.txt, cannot provide the granular control over the use of works that right holders seek. Robots.txt does not allow a fine degree of control, because content that is made available for search indexing or language training may not be distinguished from content used for generative AI (a short illustrative sketch of this limitation appears at the end of this piece). The consultation proposes standardisation that ensures developers have legal access to training data and that protocols protecting the data privacy of content are met.
Figure 4: Key focus areas to achieve technical standardisation
Contracts and licensing
Contracts and licensing for AI training often involve creators licensing their works through collective management organizations (CMOs) or directly to developers, but creators sometimes lack control over how their work is used. Broad or vague contractual terms and industry expectations can make it challenging for creators to protect their rights. CMOs play a crucial role in efficiently licensing large collections of works, ensuring fair remuneration for creators while simplifying access for AI developers. However, new structures may be needed to aggregate and license data for AI training. The government aims to support good licensing practices, fair remuneration, and mechanisms like text and data mining (TDM) exceptions to balance the needs of right holders and AI developers. Additionally, copyright and AI in education require consideration to protect pupils' intellectual property while avoiding undue burdens on educators.
Ensuring Transparency: Tackling Challenges in Openness and Accountability
Transparency is crucial for building trust in AI and copyright frameworks. Right holders face challenges in determining whether their works are used for AI training, as some developers do not disclose, or provide only limited, information about training data sources.
Greater transparency can help enforce copyright law, assess legal liabilities, and foster consumer confidence in AI systems. Potential measures include requiring AI firms to disclose datasets, web crawler details, and compliance with rights reservations. However, transparency must be balanced with practical challenges, trade secret protections, and proportionality. International approaches, such as the EU's AI Act and California's AB 2013, offer insights into implementing effective transparency standards, which the UK will consider for global alignment.
Enhancing Accountability Through Effective AI Output Labelling Standards
Labelling AI-generated outputs enhances transparency and benefits copyright owners, service providers, and consumers by providing clear attribution and informed choices. Industry initiatives like Meta's 'AI info' label exemplify current efforts, but consistent regulation may be needed to ensure uniformity and effectiveness. Challenges include defining the threshold for labelling, scalability, and preventing manipulation or removal of labels. International developments, such as the EU AI Act's rules for machine-readable labels, offer valuable insights. The UK government will explore supporting research and development for robust labelling tools to promote transparency and facilitate copyright compliance.
Figure 5: AI Labelling, depicted.
Navigating Challenges in Regulating Digital Replicas
The use of AI to create "digital replicas" of actors and singers—realistic images, videos, and audio replicating their voice or appearance—has raised significant concerns within the creative industries. These replicas are often made without consent, using AI tools trained on an individual's likeness or voice. Existing protections in the UK, such as intellectual property rights, performers' rights under the CDPA 1988, and data protection laws, offer some control over the misuse of personal data or unauthorized reproductions. However, concerns remain about AI's ability to imitate performances or create synthetic reproductions, prompting calls for stronger legal protections, such as the introduction of personality rights. The government acknowledges these concerns and is exploring whether the current legal framework adequately protects individuals' control over their personality and likeness, while monitoring international developments, such as proposed federal laws in the US.
Policy Analysis and The Way Ahead
The UK government's Copyright and AI consultation is a critical moment for policy to strike the balance between technological innovation and the protection of creative industries. Broadly, the proposal aims to work through a complicated thicket of legal issues around AI model training. It would allow AI developers access to copyrighted works unless rights holders specifically opt out, addressing the considerable grey areas of uncertainty that still hang over AI development. The consultation accepts that the pace of technological development no longer fits well with the existing copyright framework, putting the UK in danger of losing its edge in global AI innovation. An opt-out mechanism within copyright rules would give policymakers, who otherwise could not be sure how to protect intellectual property, a way to do so in an environment conducive to technological improvement. The creative industries express grave concerns that unlicensed use of their works by AI firms, arising from a notion of fair-use protections, will undermine their ownership.
AI companies counter that, without access to the training data required to build sophisticated machine learning models, whether through licensing or exceptions, they will not be able to continue their work. The intention of the consultation is to find some common ground: a solution that simultaneously ensures AI's continued development and gives content creators some control and possible remuneration, which would help de-escalate conflicts between these two groups. Arising out of a more long-term vision, the consultation represents the beginning of an attempt to get ahead of the curve in shaping copyright law, technology development, and IP issues in an increasingly AI-governed world.
References
UK Government. (2021, December 16). Copyright and artificial intelligence. GOV.UK. Retrieved December 25, 2024, from https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence
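To make the technical-standards point above concrete, here is a minimal sketch of how robots.txt-style rights reservation behaves in practice. It is illustrative only: GPTBot and Google-Extended are publicly documented AI-crawler user agents used here as examples, and the sketch simply shows that a publisher can allow or block a crawler wholesale but cannot express purpose-level reservations (for instance, "index for search but do not train generative models") through robots.txt alone.

```python
# Minimal sketch of robots.txt-style rights reservation. Rules attach to
# crawler user agents (GPTBot, Google-Extended are publicly documented
# AI-training crawlers used as examples), not to purposes, so a publisher
# cannot reserve rights at the level of "search indexing yes, generative
# training no" within robots.txt itself.
from urllib.robotparser import RobotFileParser

robots_txt = """
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The ordinary search crawler remains allowed; the AI-training crawlers are
# blocked outright, with no finer-grained, purpose-based middle ground.
for agent in ["GPTBot", "Google-Extended", "Googlebot"]:
    print(agent, "may fetch /articles/:", parser.can_fetch(agent, "/articles/"))
```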
- Character.AI, Disruption, Anger and Intellectual Property Dilemmas Ahead
The author is currently a Research Intern at the Indian Society of Artificial Intelligence and Law. Made with Luma AI.
What is Character.AI and What is the Mass Deletion Event?
Imagine having your personal Batman, Superman, Iron Man, or even Atticus Finch, someone you can interact with at any moment. Character.AI has turned this dream into reality for many, especially within fandom communities. Character.AI is an artificial intelligence (AI) platform through which users interact with and create AI-powered chatbots, based on either fictional or real people. Since its launch in 2021, the platform has gained significant traction among fandom communities and has become a go-to platform for exploring interactions with favorite, often fictional, characters. However, the platform's user base isn't limited to fandom communities; it also extends to people interested in history, philosophy, literature, and other niche interests. Character.AI also enjoys an advantage available to only very few platforms: a diverse user base, encompassing everyone from users with serious interests to casual explorers. Users from fandom communities saw the platform as a new way to engage with their favorite characters. Character.AI also enjoys a good demographic spread, with the majority of its users located in the United States, Brazil, India, Indonesia and the United Kingdom. However, Character.AI has also had its fair share of controversies, including the latest one, in which it carried out a mass deletion drive involving copyrighted characters, raising concerns over copyright infringement, platform liability, and platform ethics in the context of AI-generated content.
Overview of Character.AI's platform and user base
Character.AI's core value proposition lies in enabling users to interact with AI-powered chatbots designed to simulate lifelike conversation. These chatbots reflect diverse personalities, conversational styles and traits unique to the character upon which the chatbot was trained, making the platform particularly popular for role-playing with favorite characters and storytelling. At the heart of it all, Character.AI is a conversational AI platform that hosts a wide range of chatbots and gives users the ability to either interact with existing characters or create their own, customising the characters' personalities and responses. Character.AI boasts a diverse user base, with a large chunk of it falling within the 18-23 age group. The composition of its user demographics is visually represented in the following figure:
Figure 1: Age distribution of Character.AI Visitors
The platform hosts a wide range of characters, including historical figures, celebrities, fictional characters, and even dungeon masters. This makes it accessible to people belonging to different age groups. It is also quite evident that the majority of its user base stems from the 18-24 age group, and users in the age groups up to 44 years together make up 89.84 percent of its user base.
Summary of the mass deletion of copyrighted characters
In November 2024, Character.AI carried out a mass deletion drive of AI chatbots that were based on copyrighted characters from various franchises, including "Harry Potter," "Game of Thrones," and "Looney Tunes." The company announced that the deletions were the result of the Digital Millennium Copyright Act (DMCA) as well as copyright law.
However, the company did not explain why it did this or whether it had proactively engaged in a dialogue with the copyright holders, most notably Warner Bros. Discovery. Interestingly, users were not officially notified about these deletions and only came to know about the situation through a screenshot circulating online. The removals met with a strong backlash from the user community, in particular from those within fandom cultures who had invested time, enthusiasm, and emotional energy in their interactions with these AI characters. The removal of popular figures such as Severus Snape, who had clocked 47.3 million user chats, threw the fandom community into turmoil and, at the same time, made people doubt the future of Character.AI and its relationship with copyrighted content. Initial user reactions and impact on the fandom community The initial reactions from users reflected frustration, disappointment, and anger. Some users considered migrating to different AI platforms, as the deletions sparked discussions about the balance between copyright protection and creative expression on AI platforms. Many users expressed disappointment over the lack of prior notice regarding the deletion drive. One user remarked: "at least a prior notice would be nice. This allows us to archive or download the chats at the very least. Also, I earnestly hope you finally listen to your community. Thank you!". Others criticized the unprofessionalism of communicating the news two days after the deletion drive had already occurred. Some users acknowledged, and in some ways already anticipated, the potential reasons behind the deletion drive, recognizing the need for Warner Bros. Discovery to protect its IP from potential controversies; even so, they were mostly concerned about the lack of transparent communication and the absence of any heads-up. Copyright Law and AI-Generated Content The mass deletion on Character.AI highlights the complex legal issues at the intersection of copyright law and AI-generated content. The use of copyrighted characters in AI chatbots raises concerns around copyright infringement, fair use, and the responsibilities of AI platforms regarding intellectual property rights. Analysis of copyright infringement claims in AI-generated chatbots Intellectual property law, and copyright law in particular, grants exclusive rights to copyright holders, including the rights to reproduce, distribute, license, and create derivative works based on their original creative works. The emergence of AI chatbots, and conversational AI in general, presents a complex conundrum: such systems potentially infringe these exclusive rights when they reproduce the protected elements of characters, including their personalities, appearances, storylines, conversational styles, and ideologies, in effect reproducing those characters in their entirety. However, dealing with copyright infringement in the realm of AI-generated content is not an easy legal problem to resolve, since matters in this area are still pending before courts and there are limited precedents to establish a settled position. All of this is further complicated by the fact that the Large Language Models (LLMs) which power these AI systems do not simply copy and present content.
Instead, they analyze vast numbers of data points to learn patterns and generate works inspired not by a single copyright holder, but by many. Courts will need to consider factors such as the extent to which an AI chatbot copies protected elements of the copyrighted characters, the purpose of the use, and the potential impact on the market for the original work. The mind map below examines the fair use arguments with respect to AI training. Figure 2: Analysis of Fair Use in AI Training, using a mind map. Discussion of the Digital Millennium Copyright Act (DMCA) Implications The Digital Millennium Copyright Act (DMCA) provides a safe harbor framework that protects online platforms from liability for copyright infringement by their users, provided that certain conditions are met. These conditions are illustrated for reference in Figure 3. The DMCA also carries significant implications for platforms like Character.AI, requiring them to establish mechanisms for addressing infringement claims. This includes responding to takedown notices from copyright holders and proactively implementing measures to prevent potential infringements. However, the application of the DMCA to AI-generated content remains underdeveloped, leaving unanswered questions about how notice-and-takedown systems can effectively address the unique challenges posed by AI-generated content. Figure 3: DMCA Safe Harbour Compliances, depicted. Platform Liability and Content Moderation The mass deletion on Character.AI raises pertinent questions about the legal duties of AI platforms to moderate content and prevent harm. As AI chatbots become ever more capable of producing lifelike, immersive experiences, platforms face the considerable challenge of ensuring user safety, protecting intellectual property rights, and living up to various legal and ethical standards. Exploration of Character.AI's legal responsibilities as a platform Character.AI, like other online platforms, bears legal responsibilities towards its users and society at large. These include protecting user privacy, preventing harm, and complying with the law of the land. Policies and guidelines in Character.AI's terms of service set out the dos and don'ts regarding user behaviour, content, and intellectual property rights. However, the specific legal obligations, and the extent to which platforms should be held liable for content generated by their users or the actions of their chatbots, are still evolving. The recent lawsuit against Character.AI, a wrongful death case concerning a teenager's suicide after he formed a deep emotional attachment with a 'Daenerys Targaryen'-inspired chatbot, underscores the potential risks of conversational AI and, specifically, character-based conversational AI. The lawsuit alleges negligence, wrongful death, product liability, and deceptive trade practices, claiming that Character.AI had a responsibility to inform users of dangers related to the service, particularly dangers to children. Aside from these legal responsibilities, Character.AI also grapples with ethical issues, including bias in training data, preventing the 'black-boxing' of its conversational AI models, and establishing accountability for the actions and impacts of its AI systems. These ethical concerns are critical in their own right and must be addressed proactively as we seek to innovate.
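To make the reactive, notice-driven side of moderation concrete, here is a toy, purely illustrative sketch of a DMCA-style notice-and-takedown flow. It is not Character.AI's actual system and not legal advice; every name and identifier in it (TakedownNotice, handle_notice, "snape-001") is a hypothetical assumption for illustration only.

```python
# A toy, reactive notice-and-takedown sketch (illustrative only; not
# Character.AI's actual system). It shows the kind of mechanism the DMCA
# safe harbor contemplates: a rights holder files a notice, the platform
# logs it, disables the identified chatbot, and notifies the bot's creator.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TakedownNotice:
    rights_holder: str
    character_id: str   # the allegedly infringing chatbot (hypothetical ID)
    claimed_work: str    # e.g. "Severus Snape (Harry Potter)"
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class Platform:
    def __init__(self):
        self.live_characters = {"snape-001": "creator_42"}  # character -> creator
        self.audit_log = []

    def handle_notice(self, notice: TakedownNotice) -> None:
        creator = self.live_characters.pop(notice.character_id, None)
        self.audit_log.append((notice, "removed" if creator else "not_found"))
        if creator:
            # In practice, this is the point where the creator could be told
            # why, given time to archive chats, or allowed to counter-notice.
            print(f"Notified {creator}: '{notice.claimed_work}' bot disabled "
                  f"following a notice from {notice.rights_holder}.")

platform = Platform()
platform.handle_notice(TakedownNotice("Warner Bros. Discovery", "snape-001",
                                       "Severus Snape (Harry Potter)"))
```

Even this toy version shows where user expectations enter the picture: the step at which the creator is notified is exactly where prior notice and archiving options could be offered before content disappears.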
Here is an evaluation of proactive versus reactive content moderation strategies, as depicted in the figure below. Figure 4: Comparison of Reactive and Proactive Content Moderation Comparison with other AI platforms' approaches to copyrighted content Different AI platforms have adopted differing approaches to managing copyrighted content. Some platforms strictly enforce policies against the use of copyrighted characters, whereas others have taken a more permissive approach, allowing users to create and interact with AI chatbots based on copyrighted characters under certain conditions. For example, Replika and Chai focus on the creation of novel AI companions rather than replicating pre-existing characters, minimising copyright concerns. NovelAI, on the other hand, has implemented features that let users generate content based on copyrighted works, but within limitations and safeguards intended to avoid copyright violations. User Rights and Expectations in AI Fandom Spaces Copyrighted content is used to train large language models (LLMs) whose outputs are, at best, derivative of the original works, and users then further refine these models through prompting to obtain a more personalized experience and to interact with characters they could never meet in real life. Out of this a new dynamic emerges, one in which expectations are set that may be unreasonable. The dynamic becomes even more fraught when companies do not do their part in making users aware of the limitations of the conversational AI models they offer. Users invest significant time, creativity, and emotional energy in fine-tuning and interacting with these models. Those interactions have helped the models improve; users have contributed to the success of these chatbots and helped create personalized experiences for others. The initial reaction to the abrupt deletion of chatbots highlighted the basic expectations of core users: some form of control or say over the deletion of chatbots and the data generated during interactions, and prior notice so that they can archive conversations before removal. It is crucial to understand that it is not just about the energy users spent crafting personalized conversations with the chatbots, but also the comfort they sought, the ideas they had, and the brainstorming they did with those chatbots. Examination of user-generated content ownership in AI environments One major question for future technology law jurisprudence, and a major concern for users of conversational AI, is whether users are in part copyright holders of the chats between them and the characters they interact with. Platforms like Character.AI allow users to have private, personalized conversations that are often unique to their input prompts, and users can now also share their chats with others, arguably giving them the status of published works and complicating the question of ownership even further. Character.AI's Terms of Service (TOS) provide that users retain ownership of their characters and, by extension, the generated content. However, the platform reserves a broad and sweeping license to use this content for any purpose, including commercial use.
This convenient arrangement creates the potential for Character.AI to benefit commercially from user-generated content without compensating or crediting either the users who generated the derivative content or, for that matter, the holders of the original copyrighted works. Discussion of user expectations for persistence of AI characters When it comes to deletion of characters, Character.AI's TOS is broad and sweeping. It states that Character.AI reserves the right to terminate accounts and delete content for any of several reasons, including inactivity or violation of the TOS, often without prior notice. This lack of transparency in content moderation carries serious consequences, particularly for those who rely on these characters for emotional and mental support. The ethical implications of such an opaque policy are amplified in the context of fandom, where fans often depend on the parasocial relationships they enjoy with their fictional characters. In addition, the TOS provides: "You agree that Character.AI has no responsibility or liability for the deletion or failure to store any data or other content maintained or uploaded to the Services". Such terms only exacerbate the asymmetry between the control, influence, and certainty users expect and the powers the company wants to exercise unquestioned. They not only neglect user rights but also fail to address ethical concerns such as transparency and fair moderation. Analysis of potential terms of service and user agreement issues Character.AI's terms of service contain several contentious provisions, as depicted in the figure below: Figure 5: Character.AI's contentious policies, depicted. These provisions raise several legal and policy concerns. Their broad, sweeping disregard of user expectations highlights the need for a more balanced approach, one that protects user rights while still allowing for innovation and the responsible use of conversational AI. This is all the more pertinent for conversational AI systems on which users rely for emotional validation, support, and interaction, and where the consequences are of a far higher magnitude for the user than for the platform. Ethical Considerations in AI-Powered Fandom Interactions Exploration of parasocial relationships with AI characters One significant concern that has emerged since the advent of conversational AI, and especially of personalized, personality-based conversational AI, is the development of parasocial relationships. Parasocial relationships refer to one-sided attachments in which individuals develop emotional bonds with fictional or media personalities. Such attachments are an even more common occurrence in fandom spaces. Within fandom communities, where people are already emotionally invested in their favorite characters and universes, these relationships can come to feel on par with, and sometimes exceed, real-life relationships. The introduction of conversational AI further intensifies these relationships and dynamics, since the interactions become personalized, interactive, and ever more lifelike.
Character.AI offers the option to call up your personal 'Batman', 'Harvey Specter', 'Harley Quinn', or even a random 'mentorship coach'. Imagine interacting with them and feeling intimately close to the figures you admire through this feature. The increasing sophistication of AI characters and their ability to mimic human-like conversation blurs the line between the real and the simulated. For many people it all becomes real, with real-world consequences. AI companies and their developers have an ethical responsibility to ensure transparency about the limitations of AI characters and to ensure that they do not mislead users about their capabilities or simulate emotions that these systems cannot experience. Minors and the elderly are the populations most vulnerable to manipulative conversational AI systems which, if unchecked, risk leaving people in distorted realities and alienated worlds that they have created for themselves or, put more bluntly, that the AI systems manipulated them into. Discussion of potential psychological impacts on users, especially minors The psychological implications of excessive and early exposure to conversational AI are significant, particularly for children. Much like social media, these systems could hinder the development of social skills and the ability to build meaningful, real-world relationships, hurting children's prospects of growing into mature, reasonable adults who can navigate complex human dynamics. Research suggests that users, and particularly children, may be vulnerable to the "empathy gap" of AI chatbots: children are likely to treat AI characters as friends and misinterpret their responses owing to a limited understanding of the technologies they are interacting with. Studies have also suggested that interactions with AI systems can increase loneliness, sleep disturbances, alcohol consumption, and depression. Early introduction to AI systems, with limited awareness and in the absence of effective regulatory and support mechanisms, could promote unhealthy behaviours detrimental not only to human interactions but also to mental and physical health and emotional intelligence. This could have second-order effects on careers and real-world interactions, where users might carry unreasonable expectations that other humans will do as they say and expect, something which LLMs are known to do. Ethical implications of AI characters mimicking real or fictional personas AI characters that mimic real or fictional personalities raise a whole range of ethical dilemmas whose consequences we are not yet ready to understand. Issues of identity, authenticity, consent, lifelike conversational mimicry, and manipulation demand a nuanced understanding, against a backdrop of disagreement even over the definition of what AI actually is. For example, the use of AI to create personas of real people without their explicit consent can be seen as a gross violation of their privacy. Additionally, actors or creators associated with the original characters might face unintended consequences, with users displacing attachment, love, anger, pain, and distress onto them, creating real-world harms and second-order effects that are hard to mitigate. The potential for misrepresentation and manipulation by AI characters is equally troubling.
Technologies like deepfakes have already illustrated the potential for misinformation, reputational damage, and legal consequences for those whose AI personas committed or abetted such manipulation. It is also true that fictional personas may reinforce unsuitable or inappropriate narratives or behaviors drawn from the material on which the chatbots were trained. For example, an AI character based on a fictional antagonist could reinforce negative stereotypes or behaviors when the users interacting with it are unaware of how the technology functions and the required safeguards are absent. To address these risks, companies developing these AI characters must themselves adopt widely accepted ethical standards. It is crucial to educate users about the limitations of AI systems and to implement transparent practices that prevent harm. Intellectual Property Strategies for Media Companies in the AI Era The rise of AI has presented both challenges and opportunities to media companies that seek to protect their intellectual property portfolios while embracing innovation. Traditional IP frameworks need to be reimagined and redesigned to address the unique challenges that AI-generated content and AI-powered fandom bring to the table. It is crucial to highlight that AI systems have an asymmetrical advantage over the IP rights holders whose creative works are often used to train their LLMs. While LLMs and the companies that train them rapidly ideate, scale, and distribute the fruits of those models, the core legal issues that will shape future discourse remain tied up in court for years. Adding to this stagnation in policymaking is the hesitance of governments to adopt effective policies and legislation quickly, out of fear of stifling innovation altogether. The owners of those exclusive works face a slower process of defending their rights through the courts, and they are often ill-equipped with strategies to enforce their rights over their creative works. The incentive structures for AI companies encourage them to develop and scale their products quickly and to enjoy revenue from the commercialisation of these LLMs, often leaving IP holders scrambling even to claim rights over their own creative works. Meanwhile, governments, hesitant to stifle innovation or potentially helpful use cases of these systems, rarely move beyond a whack-a-mole approach to shaping the policy discourse around AI and law. Analysis of Warner Bros. Discovery's approach to protecting IP Warner Bros. Discovery is a media and entertainment company that faces the challenge of protecting its vast and mature IP portfolio in the age of AI. The company's approach involves a combination of legal strategies, protective measures, and proactive engagement with AI platforms. The rapid ideation, scaling, and implementation advantage of AI companies makes it necessary for media and creative-works copyright holders to adopt a variety of measures, both ex ante and ex post in nature. A key component of this approach involves monitoring AI platforms and communities for unauthorized use of intellectual property in training chatbots, taking legal measures against infringements, negotiating licensing opportunities, and exploring the future of media entertainment. In the present context, Warner Bros.
Discovery appears to have devised a proactive strategy to deal with infringements in the digital environment, allowing it to enforce its claims over its IP rights without resorting to litigation. Warner Bros. Discovery and other media and entertainment companies have a once-in-a-decade opportunity to collaborate with AI platforms to develop tools and technologies that protect their intellectual property portfolios while furthering innovation, curbing misinformation and unauthorised access, addressing ethical concerns, and enabling AI platforms to put in place appropriate compliance measures that reduce their liabilities. Such collaborations could pave the way for industry standards and best practices for IP protection at a stage when these technologies are still developing. They could also help educate the public about misinformation, consent, and unauthorized access, and set user expectations. Media and entertainment companies could assist AI platforms in explaining terms of service, privacy policies, and user agreements in a story format, with the help of AI characters; this would foster a healthier and more effective approach to the ethical concerns that the various stakeholders shaping the discourse around AI systems and content creation have raised time and again. Exploration of Licensing Models for AI Character Creation Recent cases, such as Dow Jones and NYP Holdings v. Perplexity AI and Bartz v. Anthropic, mark a significant turning point in the potential relationship between AI companies and the owners of the creative works on which LLMs are trained. In both cases, the owners of exclusive intellectual property have expressed willingness to explore collaboration and licensing strategies that provide fair compensation for the use of their works in training LLMs. This marks a change in the approach IP holders wish to take to earn an additional source of revenue, and it highlights that they are not opposed to the use of their copyrighted content as such, but are concerned about piracy of the content over which they are the sole rights holders. There are various licensing strategies that AI companies and media and entertainment companies could explore as a default, including exclusive licenses, non-exclusive licenses, revenue-sharing models, and usage-based licenses. These models could be explored and incorporated depending on the context in which the AI companies use the copyrighted content. The pros and cons of these models are explained hereinafter in the form of a mind map: Figure 6: Licensing Models and their types, depicted. Conclusion and Recommendations To conclude, potential collaborations between IP holders and AI platforms will shape how users and owners of creative works view incentive structures, and what other forms of entertainment are yet to be explored. The 'tabooisation' of AI systems in creative fields will only be detrimental to media companies. If they instead choose to embrace a future that is already here and is here to stay, media companies will be able to develop interactive narratives, personalized experiences, postscript bites, and other new forms of entertainment that work in collaboration with, not in isolation from, AI systems.
Here are some mind maps reflecting suggestions for balancing copyright protection and innovation in the use of AI. Figure 7: Suggestions for Balancing Copyright Protection and Innovation in AI, depicted. Figure 8: The Author's Proposed Guidelines for Ethical AI Character Creation and Interaction
- Book Review: Taming Silicon Valley by Gary Marcus
This is a review of "Taming Silicon Valley: How Can We Ensure AI Works for Us", authored by Dr Gary Marcus. To introduce, Dr Marcus is Emeritus Professor of Psychology and Neural Science at New York University, US. He is a leading voice in the global artificial intelligence industry, especially in the United States. One may agree or disagree with his assessments of Generative AI use cases and trends. However, his erudite points must be considered to understand how AI trends around Silicon Valley are documented and understood, beyond the book's intrinsic focus on industry and policy issues around artificial intelligence. The book, at its best, offers an opportunity to dive into the introductory problems in the global AI ecosystem, in Silicon Valley, and in some instances even beyond. Mapping the Current State of 'GenAI' / RoughDraft AI In this part of the book, Dr Marcus provides essential examples of how Generative AI (GenAI) solutions appear appealing but have significant reliability and trust issues. The author begins by demonstrating how most Business-to-Consumer (B2C) GenAI 'solutions' look appealing, allowing readers to explore basic examples of prompts and AI-generated content to understand the 'appealing' element of any B2C GenAI tool, be it in text or visuals. The author compares the 'Henrietta Incident', where a misleading point about Dr Marcus led a GenAI tool to produce a plausible but error-riddled output, with an LLM alleging Elon Musk's 'death' by mixing up his ownership of Tesla Motors with Tesla driver fatalities. These examples highlight the shaky ground of GenAI tools in terms of reliability and trust, which many technology experts, lawyers, and policy specialists have not focused on, despite the obvious references to these errors. The 'Chevy Tahoe' and 'BOMB' examples fascinate, showing how GenAI tools consume inputs but do not understand their outputs. Despite patching interpretive issues, ancillary problems persist. The 'BOMB' example demonstrates how masked writing can bypass guardrails, as these tools fail to understand how guardrails can be circumvented. The author responsibly avoids treating guardrails around LLMs and GenAI as perfect; many technology lawyers and specialists worldwide have misled people about these guardrails' potential. The UK Government's International Scientific Report at the Seoul AI Summit in May 2024 echoed the author's views, noting the ineffectiveness of existing GenAI guardrails. The book is an obvious starting point for understanding the hyped-up expectations associated with GenAI and the consequences that come with them. The author's approach of not over-explaining or oversimplifying the examples makes the content more accessible and engaging for readers. The Threats Associated with Generative AI The author provides interesting quotations from the Russian Federation Government's Defence Ministry and from Kate Crawford of the AI Now Institute as he offers a breakdown of the 12 biggest immediate threats of Generative AI. One important and underrated area of concern addressed in these sections is medical advice. Apart from deepfakes, the author's reference to how LLM responses to medical questions were highly variable and inaccurate was a necessary discussion.
This reminds us of a trend among influencers of converting their B2C-level content to handle increased consumer/client consulting queries, which could create a misinformed or disinformed engagement loop between the specialist/generalist and the potential client. The author impressively refers to the problem of accidental misinformation, pointing out the 'Garbage-in-Garbage-Out' problem, which could drive internet traffic, especially in technical domains like STEM. The mention of citation loops of unreal case laws alludes to how Generative AI promotes a vicious and mediocre citation loop for any topic if not dealt with correctly. In addition, the author raises an important concern around defamation risks with Generative AI. The fabrication of content used to prove defamation creates a legal dilemma, as courts may struggle to determine who should be subject to legal recourse. The book is a must-read for all major stakeholders in the Bar and Bench to understand the 'substandardism' associated with GenAI and its legal risks. The author's reference to Donald Rumsfeld's "known knowns, known unknowns, and unknown unknowns" quote frames the potential risks associated with AI, particularly those we may not yet be aware of. Interestingly, Dr Marcus debunks myths around 'literal extinction' and 'existential risk', explaining that mere malignant training imparted to ChatGPT-like tools does not give them the ability to develop 'genuine intentions'. He responsibly points out the risks of half-baked ideas like text-to-action engineering second and third-order effects out of algorithmic activities enabled by Generative AI, making this book a fantastic explainer of the 12 threats of Generative AI. The Silicon Valley Groupthink and What it Means for India [While the sections covering Silicon Valley in this book do not explicitly discuss the Indian AI ecosystem in depth, I have pointed out some broad parallels, which may be relatable to a limited extent.] The author refers to the usual hypocrisies associated with the United States-based Silicon Valley. Throughout the book, Dr Marcus refers to the works of Shoshana Zuboff and the problem of surveillance capitalism, largely associated with the FAANG companies of North America, notably Google, Meta, and others. He provides a polite yet critical review of the promises held out by companies like OpenAI and others in the larger AI research and B2C GenAI segments. The Apple-Facebook differences emphasised by Dr Marcus are intriguing. The author highlights a key point made by Frances Haugen, a former Facebook employee turned whistleblower, about the stark contrast between Apple and Facebook in terms of their business practices and transparency. Haugen argues that Apple, selling tangible products like iPhones, cannot easily deceive the public about its offerings' essential characteristics. In contrast, Facebook's highly personalised social network makes it challenging for users to assess the true nature and extent of the platform's issues. Regarding OpenAI, the author points out how the 'profits, schmofits' problem around high valuations made companies like OpenAI and Anthropic give up their safety goals in AI building. Even in the name of AI Safety, the much-repeated 'guardrails' and measures have not necessarily advanced the goals of true AI Safety that well. This is why building AI Safety Institutes across the world (as well as something along the lines of CERN, as recommended by the author) becomes necessary.
The author makes a reasonable assessment of the over-hyped and messianic narrative built by Silicon Valley players, highlighting how the loop of overpromise has largely guided the narrative so far. He mentions the "Oh no, China will get to GPT-5" myth spread across quarters in Washington DC, which relates to hyped-up conversations on AI and geopolitics in the Indo-Pacific, India, and the United States. While the author mentions several relatable points around 'slick video' marketing and the abstract notion that 'money gives them immense power', it reminds me of the discourse around the Indian Digital Competition Bill. In India, the situation gets dire because most of the FAAMG companies on the B2C side have invested their resources in such a way that, even if they are not profiting enough in some sectors, they earn well by selling Indian data and providing the relevant technology infrastructure. Dr Marcus points out the intellectual failures of science-popularising movements like effective accelerationism (e-acc). While e-acc can still be a subject of interest and awe, it does not make sense in the long run, with its zero-sum mindset. The author calls out the problems in the larger Valley-based accelerationist movements. To conclude this section, I would recommend going through a sensible response given by the CEO of Honeywell, Vimal Kapur, on how AI tools might affect less-noticed domains such as aerospace and energy. I believe readers might feel more excited to read this incredible book. Remembering the 19th Century and the Insistence to Regulate AI The author's reference to quotes by Tom Wheeler and Madeleine Albright reminds me, on a lighter note, of a quote from former UK Prime Minister Tony Blair: "My thesis about modern politics is that the key political challenge today is the technological revolution, the 21st century equivalent of the 19th century Industrial Revolution. And politics has been slow to catch up." While Blair's reference is largely political, the quotes by Wheeler and Albright point to interesting commonalities between the 19th and 21st centuries. The author provides a solid basis for why copyright laws matter when data scraping techniques in the GenAI ecosystem do not respect the autonomy and copyrights of the authors whose content is consumed and grasped. The reference to quotes from Ed Newton-Rex and Pete Dietert on the GenAI-copyright issue highlights the ethical and legal complexities surrounding the use of creative works in training generative AI models. Dr Marcus emphasizes the urgent need for a more nuanced and ethical approach to AI development, particularly in the realm of creative industries. The author uses these examples to underscore a critical point: the current practices of many AI companies in harvesting and using creative works without proper permission or compensation are ethically questionable and potentially exploitative. Pete Dietert's stark warning about "digital replicants" amplifies the urgency of addressing these issues, extending the conversation beyond economic considerations to fundamental human rights, as recognised in the UNESCO Recommendation on the Ethics of AI of 2021. Dr Marcus points out how the 'Data & Trust Alliance' webpage features appealing privacy and data protection-related legal buzzwords, but the details help shield companies more than they protect consumers.
Such attempts at subversion are being made in Western Europe, North America, and even parts of the Indo-Pacific region, including India. Among the list of demands people should make, the author focuses on algorithmic transparency and source transparency. He refers to the larger black box problem as the core basis for legally justifying why interpretability measures matter. From a consumer law and human rights perspective, AI interpretability (Explainable AI) makes it necessary to have a gestation phase at the pre-launch stage, to assess whether the activities regularly visible in AI systems can actually be interpreted. On source transparency, the author points out the role of content provenance (labelling) in enabling a distinction between human-created content and synthetic content, so that the tendency to create "counterfeit people" is prevented and discouraged. The author refers to the problem of anthropomorphism, where many AI systems create a counterfeit perception among human beings and, via impersonation, could potentially degrade their cognitive abilities. Among the eight suggestions made by Dr Marcus on how people can make a difference in bettering AI governance, the author makes the reasonable point that voluntary guidelines must be negotiated with major technology companies. In the case of India, there have been some self-regulatory attempts, like a non-binding AI Advisory in March 2024, but more consistent efforts could be implemented, starting with voluntary guidelines carrying both sector-specific and sector-neutral priorities. Conclusion Overall, Dr Gary Marcus has written an excellent prologue on how to truly 'tame' Silicon Valley, in the simplest way possible for anyone who is not aware of the technical and legal issues around Generative AI. The book also gives a glance at digital competition policy measures and the effective use of consumer law frameworks where competition policy remains ineffective. The book is not necessarily a detailed documentation of the state of AI hype. However, the examples and references mentioned in the book are enough for researchers in law, economics, and policy to trace the problems associated with the American and global AI ecosystems.
- The 'Algorithmic' Sophistry of High Frequency Trading in India's Derivatives Market
The author is a Research Intern at the Indian Society of Artificial Intelligence and Law as of October 2024. A recent study conducted by the country's market regulator, the Securities and Exchange Board of India (SEBI), shed light on stark disparities in the equity derivatives market. Per the study, proprietary traders and foreign funds using algorithmic trading earned gross profits totalling ₹588.4 billion (about $7 billion) from trading in Indian equity derivatives in the financial year that ended in March 2024. [1] In stark contrast, however, many individual traders suffered heavy losses. The study further detailed that almost 93% of individual traders suffered losses in the Equity Futures and Options (F&O) segment over the preceding three years, that is, from financial years 2022 to 2024, with aggregate losses exceeding ₹1.8 lakh crore. Notably, in the immediately preceding financial year (FY 2023-24) alone, the net losses incurred by individual traders approximated ₹75,000 crore. SEBI's findings underscore the challenges individual traders face when competing against technologically advanced, well-funded entities in the derivatives market. The study clearly indicates that institutional entities deploying algo-trading strategies have a competitive edge over those who lack them, i.e., individual traders. Understanding the Intricacies of Algorithmic Trading High-frequency trading (HFT) is the latency-sensitive subset of algorithmic trading, carried out through automated platforms. It is facilitated by advanced computational systems capable of executing large orders at speeds, and at prices, that humans cannot match. The dominance of algorithms in global financial markets has grown exponentially over the past decade. HFT algorithms aim to execute trades within fractions of a second. These high-speed computational systems place institutional investors on a more profitable footing than individual traders, who typically rely on manual trading strategies and lack access to sophisticated analytics and real-time trading systems. Furthermore, HFT allows traders to trade large volumes of shares frequently by exploiting marginal price differences within a split second, improving execution accuracy and enhancing market liquidity; a toy illustration of why this latency advantage matters appears below. The same dynamic is visible in the Indian equity derivatives market, where HFT firms reap substantial profits. The study conducted by India's market regulator sheds light on the contrasting gains and losses of institutional and individual traders respectively. It exposes the sophistries in the competitive dynamics of the country's derivatives market and the superficial regulation of manual versus computational trading.
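To make the latency point concrete, here is a toy, purely illustrative Python sketch (not a real trading system, and with entirely hypothetical numbers) of why a participant who reacts in microseconds can capture a fleeting price dislocation that a manual trader, reacting in seconds, cannot.

```python
# Toy illustration of the latency edge in algorithmic trading.
# All values are hypothetical; this is not a trading strategy.

def captured_profit(reaction_time_s: float, dislocation_life_s: float,
                    spread_per_share: float, shares: int) -> float:
    """Profit captured only if the trader reacts before the mispricing vanishes."""
    if reaction_time_s <= dislocation_life_s:
        return spread_per_share * shares
    return 0.0

# Hypothetical scenario: a 0.05 rupee mispricing that lasts 50 milliseconds.
DISLOCATION_LIFE = 0.050   # seconds the opportunity exists
SPREAD = 0.05              # rupees per share
SHARES = 10_000

algo_profit = captured_profit(reaction_time_s=0.0002,   # ~200 microseconds
                              dislocation_life_s=DISLOCATION_LIFE,
                              spread_per_share=SPREAD, shares=SHARES)
manual_profit = captured_profit(reaction_time_s=1.5,    # a fast human click
                                dislocation_life_s=DISLOCATION_LIFE,
                                spread_per_share=SPREAD, shares=SHARES)

print(f"Algorithmic trader captures: ₹{algo_profit:,.2f}")
print(f"Manual trader captures:      ₹{manual_profit:,.2f}")
```

The point of the sketch is simply that the profit opportunity expires faster than a human can act, which is the structural edge the SEBI study attributes to algorithmic participants.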
The Competitive Landscape of the Derivatives Market: The Odds Stacked Against Individual Traders The study revealed the disadvantageous plight of retail traders, with roughly nine out of ten retail traders having incurred losses over the preceding three financial years. This raises a contentious debate about the viability of individual traders and about market dynamics in the derivatives landscape. Without the requisite support and resources, individual traders will find it difficult to sustain themselves, especially against the backdrop of a growing trend towards algorithmic trading. HFT has been critiqued by several professionals, who hold it responsible for unbalancing the playing field of the derivatives market. Other impediments brought forth by such trading mechanisms include market noise, price volatility, the need to strengthen surveillance mechanisms, heavier costs, and market manipulation with consequent disruption to the structure of capital markets. The Need to Regulate the Technological 'Arms Race' in Trading Given the evident differences between trading mechanisms, there is a pressing need to improve trading tools and to ensure easier access to related educational resources for individual investors. SEBI, India's capital market regulator, has both the prerogative and the obligation to address such disparities. In 2016, SEBI released a discussion paper that attempted to address the various issues relating to HFT mechanisms, with the aim of instituting an equitable and fair marketplace for every stakeholder involved. SEBI proposed a shared "co-location facility" that would not allow the installation of individual servers. This proposal aims to reduce the latency of access to the trading system and to provide a tick-by-tick data feed free of cost to all trading stakeholders. SEBI further proposed a review of the requirements for using algo-trading software, mandating stock exchanges to strengthen the regulatory framework for algo-trading and to institute a simulated market environment for initial testing of such software prior to its real-time deployment. [2] In addition, SEBI has undertaken a slew of measures to regulate algo-trading and HFT. These include [3]: a minimum resting time for stock orders; a mechanism capping the ratio of order messages to trades executed; randomisation of orders and a review of the tick-by-tick data feed; and congestion charges to reduce the load on the market. A minimal sketch of how one of these checks, the order-to-trade ratio cap, might be computed follows below.
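The following is a minimal, hypothetical sketch of an exchange-side check on the order-message-to-trade ratio, one of the measures listed above. The threshold value, penalty logic, and function names are assumptions for illustration; the actual slabs and charges are set by the exchanges under SEBI's framework.

```python
# Hypothetical illustration of an order-to-trade ratio check of the kind
# applied to algorithmic traders. The 50x cap used here is invented for
# illustration; real thresholds and charges differ.

from dataclasses import dataclass

@dataclass
class DailyActivity:
    member_id: str
    order_messages: int   # new orders + modifications + cancellations
    trades_executed: int

def order_to_trade_ratio(activity: DailyActivity) -> float:
    """Ratio of order messages to executed trades for the day."""
    if activity.trades_executed == 0:
        return float("inf")   # all messages, no trades
    return activity.order_messages / activity.trades_executed

def flag_if_excessive(activity: DailyActivity, max_ratio: float = 50.0) -> bool:
    """Return True if the member breaches the (hypothetical) ratio cap."""
    return order_to_trade_ratio(activity) > max_ratio

algo_member = DailyActivity("ALGO123", order_messages=500_000, trades_executed=4_000)
manual_member = DailyActivity("RETAIL42", order_messages=35, trades_executed=20)

for member in (algo_member, manual_member):
    print(f"{member.member_id}: ratio={order_to_trade_ratio(member):.1f}, "
          f"flagged={flag_if_excessive(member)}")
```

Message-to-trade caps of this kind aim to discourage the flood of rapid-fire order placements and cancellations that contributes to the market noise and surveillance burden described above.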
Thus, despite HFT remaining largely unregulated in India, SEBI has overarching authority over it through the provisions of the SEBI Act, 1992. That authority, however, is exercised only in a rudimentary manner, and the market consequently continues to see unhealthy competition among traders. References [1] Newsdesk, High Speed Traders reap $7bn profit from India's options market, https://www.thenews.com.pk/print/1233452-high-speed-traders-reap-7bn-profit-from-india-s-options-market (last visited on 6 Oct 2024). [2] Amit K Kashyap et al., Legality and issues relating to HFT in India, Taxmann, https://www.taxmann.com/research/company-and-sebi/top-story/105010000000017103/legality-and-issues-related-to-high-frequency-trading-in-india-experts-opinion (last visited on 6 Oct 2024). [3] Id.











