

Search Results

116 results found

  • Examining the Perplexity Position on Antitrust Issues associated with Google

Aravind Srinivas, the CEO of Perplexity AI, recently announced that his company had been asked to testify before the United States Congress in connection with the antitrust case against Google. Perplexity AI's position on the Google antitrust case reveals surprising parallels with the Trump Administration's April 3, 2025 AI memorandums, though significant tensions exist in how their approaches to intellectual property protection and competition would affect Indo-Pacific digital sovereignty. Let's examine this further in this brief note.

Convergence on Contract-Based Remedies vs. Structural Breakups
Perplexity's antitrust testimony, which argues against breaking up Google while supporting contractual reforms, aligns notably with the Trump Administration's approach to vendor relationships in M-25-22. Both emphasize:
Anti-vendor lock-in provisions over structural remedies: Perplexity argues that "the remedy isn't breakup. Just let people choose," mirroring M-25-22's focus on "clear data and model portability practices, clear licensing terms, and pricing transparency" to prevent vendor dependencies rather than breaking up technology ecosystems.
Contractual controls on data usage: Perplexity criticizes Google's control mechanisms that "threatened OEMs and carriers with wide ranging penalties," similar to how M-25-22 requires contracts to "permanently prohibit the use of non-public inputted agency data...to further train publicly or commercially available AI algorithms...absent explicit agency consent".

India's Google Antitrust Actions
The Competition Commission of India (CCI) has pursued remarkably similar concerns about Google's market dominance through a different legal framework.
Recent Android TV Settlement: On April 22, 2025, Google settled a nearly four-year-old case with the CCI concerning anti-competitive practices around Android Smart TVs by:
Developing a "New India Agreement" providing standalone licenses for the Play Store and Play Services without bundling requirements;
Paying ₹20.24 crore (approximately $2.4 million) as settlement; and
Waiving the requirement that manufacturers hold valid Android Compatibility Commitments for devices that do not include Google apps.
The settlement was approved through a majority order, with one member, Anil Agarwal, dissenting on the ground that the settlement "does not eliminate existing arrangements under TADA which have been prima facie found to be contravening the provisions of the Act".

Divergent Interests on IP and Competition
Despite these parallels, critical differences emerge in IP protection and competitive priorities:
IP Protection Stance: The Trump memorandums take a hardline position on IP protection, with M-25-22 requiring agencies to "clearly delineate the respective ownership and IP rights" and explicitly rejecting weakened IP protections for AI training. Perplexity's testimony, while not explicitly addressing IP law, focuses on competitive access rather than IP protection.
America-First vs. Open Choice: M-25-21 and M-25-22 repeatedly emphasize "maximizing the use of AI products and services that are developed and produced in the United States", while Perplexity frames its position more neutrally as enabling "consumer choice" regardless of national origin.
Relationship with Big Tech: Where Perplexity explicitly supports Google retaining Chrome and praises Google for "open-sourcing Chromium", the Trump memorandums take a more cautious stance toward large tech vendors, focusing on preventing "significant and costly dependencies on a single vendor".
Conclusion: Strategic Implications for Indo-Pacific Competition Policy The intersection of these three approaches—Trump's AI policy framework, Perplexity's antitrust position, and India's CCI enforcement—creates distinct challenges for Indo-Pacific nations: Contrasting Remedy Models: While both Trump's memorandums and India's CCI focus on contractual remedies over breakups, they differ in implementation. The CCI's "New India Agreement" created an alternative contractual framework that Google could offer alongside existing agreements, while Trump's approach requires more explicit prohibitions on using government data for commercial AI training. Enforcement Mechanisms: The CCI settlement includes specific compliance reporting requirements and a five-year monitoring period, paralleling Trump's memorandum requiring agencies to " ensure contracts permanently prohibit " improper data usage and establish ongoing monitoring. Dissent and Accountability: The CCI's published dissent by Anil Agarwal highlights concerns about whether contractual remedies without eliminating existing arrangements would be sufficient—a critique that could equally apply to Perplexity's position that consumer choice alone can address Google's dominance.

  • The Position of Second Trump Admin on AI and Intellectual Property Laws: The April 2025 Memorandums

    The Trump Administration recently released two significant memorandums—M-25-22 ("Driving Efficient Acquisition of Artificial Intelligence in Government") and M-25-21 ("Accelerating Federal Use of AI through Innovation, Governance, and Public Trust")—providing guidelines on federal use and procurement of artificial intelligence (AI). This policy brief analyzes these memorandums specifically regarding intellectual property (IP) protections in AI development, contrasting them with recent advocacy by certain tech companies for weakened copyright protections. From an Indo-Pacific perspective, these memorandums signal a continued U.S. commitment to IP protection while simultaneously promoting AI innovation in global technology competition, particularly with China. This position has significant implications for Indo-Pacific nations navigating their own AI governance frameworks and IP protection regimes, especially as they position themselves within the U.S.-China technological rivalry. The Call to Undo TRIPS and IP Laws using Fair Use Justifications Several prominent tech leaders and companies have actively lobbied for relaxed IP protections for AI development. OpenAI's March 13, 2025, submission to the White House Office of Science and Technology Policy explicitly called for fundamental changes to U.S. copyright law that would allow AI companies to use copyrighted works without permission or compensation to rightsholders. OpenAI framed this position as crucial for national security and American competitiveness, claiming: "Applying the fair use doctrine to AI is not only a matter of American competitiveness — it's a matter of national security... If [China's] developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over. America loses, as does the success of democratic AI." The Trump Administration's Position The Trump Administration's approach, as articulated in the two recent memorandums, takes a more balanced stance that respects existing IP frameworks while promoting AI innovation. Rather than endorsing OpenAI's call for unrestricted access to copyrighted materials, the memorandums emphasise: The importance of protecting intellectual property rights in AI development and procurement The need for clear delineation of data ownership and IP rights in government contracts Restrictions on unauthorised use of government data for training commercial AI systems Preserving Leadership While Preventing Regulatory Capture? Figure 1: Trump 2.0 Administration's AI Gov. approach compared with that of Biden Admin 1.0 While the Trump administration's AI memorandums mark a regulatory pivot, they maintain continuity with Biden-era policies by equally prioritising American global AI leadership and national security interests. Both administrations recognize AI's strategic importance, with M-25-21 explicitly stating agencies must adopt a "forward-leaning and pro-innovation approach" while maintaining "strong safeguards for civil rights, civil liberties, and privacy"—language echoing Biden's objectives while avoiding duplication of efforts across agencies. Where Trump's approach distinctly diverges is in addressing the potential for regulatory capture inherent in Biden's framework. 
While Biden's comprehensive regulatory structure created complex compliance requirements that well-resourced tech giants could navigate—potentially entrenching their market dominance—Trump's memorandums streamline these requirements while maintaining critical intellectual property protections. M-25-22 specifically addresses this by requiring agencies to "have appropriate processes for addressing use of government data and include appropriate contractual terms that clearly delineate the respective ownership and IP rights", preventing large AI companies from exploiting public resources without accountability. This balanced approach illustrates how Trump's administration avoids regulatory capture while preserving common national objectives. Rather than allowing major tech companies to use government data to improve their commercial offerings—as OpenAI and others publicly advocated for—Trump's memorandums explicitly prohibit "the use of non-public inputted agency data and outputted results to further train publicly or commercially available AI algorithms...absent explicit agency consent". This stance protects both government interests and prevents large corporations from monopolising government-sourced data benefits, creating a more level playing field that maintains American leadership without favoring established tech giants. Analysis of the Memorandums' IP Provisions M-25-22: Protecting IP in Government AI Procurement M-25-22 specifically addresses IP rights in section 3(e), emphasizing that agencies must "have appropriate processes for addressing use of government data and include appropriate contractual terms that clearly delineate the respective ownership and IP rights of the government and the contractor". The memorandum notes that "careful consideration of respective IP licensing rights is even more important when an agency procures an AI system or service". Key IP protection provisions include: Agencies must update acquisition procedures within 270 days to include appropriate contract terms for IP rights, as outlined in Section 3(b)(iii). Section 3(e) mandates processes to delineate ownership and IP rights, especially when agency data is used to train, fine-tune, or develop AI, to prevent vendor lock-in. Contracts must prohibit vendors from using nonpublic agency data to further train publicly or commercially available AI algorithms without explicit consent, consistent with applicable law. Solicitation and contract terms must address IP rights and data usage, ensuring transparency and accessibility (Sections 4(c)(iii) and 4(d)(iii)). At contract closeout, agencies must ensure ongoing rights and access to data or derived products, maintaining usability and format (Section 4(f)(i)). These provisions reflect a strong emphasis on protecting IP rights in government AI acquisitions, ensuring that proprietary technologies and government data are safeguarded. M-25-21: Balancing Innovation and IP Protection This memorandum focuses on promoting responsible AI adoption, with specific mentions of IP in the context of sharing AI code and models: Section 2(b)(i) requires agencies to share custom-developed AI code across the federal government, except when restricted by law or regulation, including patent or IP law, Export Asset Regulations, International Traffic in Arms Regulations, or classified information laws. Agencies are encouraged to release AI code as open source where practicable, but this is subject to IP law restrictions, balancing transparency with proprietary rights. 
The memorandum also directs agencies to: "Take steps to ensure that their contracts retain sufficient rights to Federal Government data and retain any improvements to that data" Implement "contractual terms that prevent vendor lock-in and also protect Federal information used by vendors in the development and operation of AI products and services" Ensure contracts "protect such data from unauthorised disclosure or use, and from being used to train or improve the functionality of the vendor's commercial offerings without express permission from the agency" Assessment and Recommendations Key Findings Rejection of Tech Industry's IP Position : The Trump Administration memorandums implicitly reject OpenAI and other tech leaders' calls for weakened IP protections for AI training, instead emphasizing the importance of clear IP rights delineation and protection. Balanced Approach to Competition with China : While acknowledging the competitive challenge from China, the memorandums propose addressing it through innovation within existing IP frameworks rather than by dismantling those frameworks. Emphasis on Consent and Control : The memorandums consistently prioritize consent and control over data usage, including requiring explicit permission for using government data to train commercial AI systems. Recommendations for Indo-Pacific Stakeholders Develop Regionally Appropriate IP Frameworks : Indo-Pacific nations should consider developing AI governance frameworks that balance innovation with appropriate IP protections, using the U.S. memorandums as potential reference points. Strengthen Regional Cooperation on IP Protection : Given the transnational nature of both AI development and IP challenges, regional cooperation mechanisms should be strengthened to develop consistent approaches to IP protection in AI. Engage with Major Technology Companies : Indo-Pacific governments should proactively engage with technology companies to develop workable solutions that respect creator rights while enabling responsible AI innovation. Explore Consent-Based Models : Following the U.S. memorandums' emphasis on consent, Indo-Pacific nations could explore frameworks that facilitate IP licensing for AI training while ensuring appropriate compensation and control for creators. Conclusion The Trump Administration's recent AI memorandums signal a rejection of calls by OpenAI and other tech leaders to weaken IP protections for AI development. Instead, they chart a course that respects existing IP frameworks while promoting AI innovation as part of the strategic competition with China. For Indo-Pacific nations navigating their own approaches to AI governance, these memorandums offer valuable insights into balancing innovation, competition, and intellectual property rights. As the region continues to develop its technology sectors and position itself within the global AI landscape, these policy directions will likely influence regional approaches to the intersection of intellectual property and artificial intelligence.

  • US Government Accountability Office’s Testimony on Data Quality and AI, Explained

The Government Accountability Office (GAO) testimony before the Joint Economic Committee highlights a critical challenge facing the federal government: how to leverage artificial intelligence to combat fraud and improper payments while ensuring data quality and workforce readiness. This analysis examines the intricate relationship between data quality, skilled personnel, and AI implementation in government settings, drawing insights from the GAO's extensive research and recommendations.

The Magnitude of the Problem: Fraud and Improper Payments
The federal government faces staggering financial losses due to fraud and improper payments. According to GAO estimates, fraud costs taxpayers between $233 billion and $521 billion annually, based on fiscal year 2018-2022 data. Since fiscal year 2003, cumulative improper payment estimates by executive branch agencies have totaled approximately $2.8 trillion. The scale of this problem demonstrates why innovative solutions like AI are being considered. Improper payments, defined as payments that should not have been made or were made in incorrect amounts, are concentrated in five key program areas that account for about 75% ($121 billion) of all estimated improper payments reported for fiscal year 2024. These figures underscore the urgent need for more effective detection and prevention mechanisms.

Data Quality + AI Effectiveness?
At the heart of the GAO testimony is a fundamental principle in artificial intelligence implementation: the quality of data directly determines the effectiveness of AI systems. This relationship is encapsulated in the common phrase among AI developers: "garbage in, garbage out".

Data Quality Requirements for AI Success
For AI systems targeting fraud and improper payments, several data quality factors emerge as essential:
Labeled Training Data: Machine learning systems require properly categorized historical payment data (accurate, improper without fraud, or fraudulent). Incorrect labelling leads to unreliable results.
Data Integrity: The testimony highlights concerns about "data poisoning," a process whereby malicious actors alter training data to compromise AI system behaviour.
Ground Truth Data: The GAO emphasises that "solid, reliable 'ground truth' data" is necessary for AI tools to function effectively. Without this foundation, even sophisticated algorithms cannot produce trustworthy results.
Data Standardisation: Government data varies significantly in quality and standards across agencies, creating challenges for unified AI applications.

Consequences of Poor Data Quality
The testimony outlines specific risks when AI systems operate with inadequate data quality:
False Negatives: Systems may fail to detect actual improper payments, allowing fraud to continue undetected.
False Positives: Legitimate payments might be erroneously flagged as improper, potentially delaying or denying rightful payments to beneficiaries and small businesses.
Efficiency Loss: If errors become too numerous, agencies could spend more time identifying AI's mistakes than they save compared to traditional detection methods.
A short illustrative sketch of this false-negative/false-positive trade-off appears at the end of this note.

Workforce Challenges in AI Implementation
The second critical pillar for successful AI deployment is having personnel with appropriate skills and expertise. The GAO identifies several significant workforce challenges:
Expertise Gap: There is a "severe shortage of federal staff with AI expertise" despite the growing need for these skills.
Compensation Issues : Current federal digital staff compensation is not competitive with private sector offerings, making recruitment difficult. Hiring Process : Many qualified candidates "may not be willing to endure the lengthy federal hiring process," creating recruitment bottlenecks. Advanced Skills Requirement : Agencies particularly need staff with advanced skills, suggesting a focus on graduate-level education for the AI workforce. The testimony notes that these workforce challenges have been persistent, with GAO having "identified mission-critical gaps in federal workforce skills and expertise in science, technology, engineering, and mathematics" for decades. Governance Frameworks for Responsible AI Use To address both data quality and workforce challenges, the GAO has developed a comprehensive AI Accountability Framework with 31 key practices. This framework emphasises: Organisational Governance : Entities should define clear goals, roles, and responsibilities while demonstrating values that foster trust. Data Governance : Five key practices help entities use appropriate data for developing AI models, including documenting data sources and ensuring appropriate data incorporation into models. Human Oversight : Maintaining a "human in the loop" is vital for ensuring oversight of data and processes. AI should complement, not replace, human judgment in fraud detection. Stakeholder Engagement : Organizations should "engage stakeholders with diverse perspectives to mitigate risks" in AI implementation. Strategic Recommendations for Improvement The GAO testimony offers several concrete recommendations to enhance AI capabilities for fraud prevention: Legislative Action : Congress should consider making permanent the Social Security Administration's authority to share full death data with Treasury's Do Not Pay system. Analytics Center : Establishing a permanent analytics center of excellence focused on fraud and improper payments could centralize AI expertise and tools. Digital Services Academy : Creating an academy similar to military academies could help train future workers with the necessary AI skills. Data Access Improvement : Agencies need better access to data for fraud indicator analysis, as identified in GAO's 2023 survey on fraud risk management. Balancing Technology with Human Expertise A recurring theme throughout the testimony is the need to balance technological capabilities with human expertise. The GAO notes that "AI does not replace the professional judgment of experienced staff in detecting potentially fraudulent activities". While AI can process large volumes of data rapidly, human intelligence remains essential for: Contextual Understanding : Interpreting patterns within their proper context Judgment Application : Determining appropriate actions based on AI findings Technology Selection : Choosing the right tools for specific fraud detection scenarios Ethical Oversight : Ensuring AI systems operate within ethical and legal boundaries Conclusion The GAO testimony provides a framework for understanding the dual foundations necessary for successful AI implementation in government fraud prevention: high-quality data and a skilled workforce. Without addressing both elements, AI initiatives risk falling short of their potential or, worse, creating new problems through false results. 
As agencies continue exploring AI applications for fraud detection and prevention, the principles outlined in this testimony offer critical guidance for ensuring that these technologies serve their intended purpose while maintaining public trust in government systems.
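To make the testimony's data-quality points concrete, here is a small, purely illustrative sketch, not drawn from the GAO's materials: a toy payment classifier trained on synthetic, hypothetical data, with a confusion matrix separating false negatives (missed improper payments) from false positives (legitimate payments wrongly flagged). All feature names, thresholds, and numbers are invented for demonstration.

```python
# Illustrative sketch only: synthetic, hypothetical payment data, not GAO data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

# Hypothetical features: payment amount, days since last audit, prior flags on the payee.
n = 2000
X = np.column_stack([
    rng.lognormal(8, 1, n),      # payment amount
    rng.integers(0, 365, n),     # days since last audit
    rng.poisson(0.3, n),         # prior flags on the payee
])

# "Ground truth" labels: 1 = improper payment, 0 = proper. In practice these labels
# come from verified historical determinations; mislabeling them is exactly the
# "garbage in, garbage out" risk the testimony describes.
risk = 0.00004 * X[:, 0] + 0.003 * X[:, 1] + 0.8 * X[:, 2]
y = (risk + rng.normal(0, 1, n) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
print(f"false negatives (missed improper payments): {fn}")
print(f"false positives (legitimate payments flagged): {fp}")
```

Corrupting even a fraction of the synthetic "ground truth" labels in a setup like this inflates both error counts, which is the dynamic the testimony warns about when historical payment data is poorly labeled or standardised.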

  • Indo-Pacific Research Principles on Use of Large Reasoning Models & 2 Years of IndoPacific.App

The firm is delighted to announce the launch of the Indo-Pacific Research Principles on Large Reasoning Models, and to mark two successful years of the IndoPacific App, our legal and policy literature archive since 2023. The first section covers the firm's reasoning for introducing these principles on large reasoning models.

The Research Principles: Their Purpose
Large Reasoning Models (LRMs), as developed by AI companies across the globe, whose examples include Deeper Search & Deep Research by xAI (Grok), Deep Research by Perplexity, Gemini's Deep Research and even OpenAI's own Deep Reasoning tool, are supposed to mimic the reasoning abilities of human beings. The development of LRMs emerged from the recognition that standard LLMs often struggle with complex reasoning tasks despite their impressive language generation capabilities. Researchers observed, or rather supposed, that prompting LLMs to "think step by step" or to break down problems into smaller components often improved performance on mathematical, logical, and algorithmic challenges. Models like DeepSeek R1, Claude 3.7, and GPT-4 are frequently cited examples that incorporate reasoning-focused architectures or training methodologies. These models are trained to produce intermediate steps that are supposed to 'resemble' human reasoning processes before arriving at final answers, and they claim to exhibit reasoning capabilities that mimic human thought, often displaying their work through "reasoning traces" or step-by-step explanations. However, recent research has begun to question these claims and to identify significant limitations in how these models actually reason. While LRMs have shown gains on certain benchmarks, researchers have found substantial evidence suggesting that what appears to be reasoning may actually be sophisticated pattern matching rather than genuine logical processing.

The Anthropomorphisation Trap
A critical issue in evaluating LRMs is what researchers call the "anthropomorphisation trap": the tendency to interpret model outputs as reflecting human-like reasoning processes simply because they superficially resemble human thought patterns. The inclusion of phrases like "hmm...", "aha...", or "let me think step by step..." may create the impression of deliberative thinking, but these are more likely stylistic imitations of human reasoning patterns present in training data than evidence of actual reasoning. This trap is particularly concerning because it can lead researchers and users to overestimate the models' capabilities. When LRMs produce human-like reasoning traces that appear thoughtful and deliberate, we may incorrectly attribute sophisticated reasoning abilities to them that do not actually exist.

Here is an overview of the limitations associated with large reasoning models:
Lack of True Understanding: LRMs operate by predicting the next token based on patterns learned during training, but they fundamentally lack a deep understanding of the environment and concepts they discuss. This limitation becomes apparent in complex reasoning tasks that demand true comprehension rather than pattern recognition.
Contextual and Planning Limitations: Although modern language models excel at grasping short contexts, they often struggle to maintain coherence over extended conversations or larger text segments. This can result in reasoning errors when the model must connect information from various parts of a dialogue or text. Additionally, LRMs frequently demonstrate an inability to perform effective planning for multi-step problems.
Deductive vs. Inductive Reasoning: Research indicates that LRMs particularly struggle with deductive reasoning, which requires deriving specific conclusions from general principles with a high degree of certainty and logical consistency. Their probabilistic nature makes achieving true deductive closure difficult, creating significant limitations for applications requiring absolute certainty.

A paper co-authored by Prof. Subbarao Kambhampati, entitled "A Systematic Evaluation of the Planning and Scheduling Abilities of the Reasoning Model o1", directly addresses critical themes from our earlier discussion of Large Reasoning Models. Here are some quotes from this paper: "While o1-mini achieves 68% accuracy on IPC domains compared to GPT-4's 42%, its traces show non-monotonic plan construction patterns inconsistent with human problem-solving [...] At equivalent price points, iterative LLM refinement matches o1's performance, questioning the need for specialized LRM architectures. [...] Vendor claims about LRM capabilities appear disconnected from measurable reasoning improvements."

The Indo-Pacific Research Principles on Use of Large Reasoning Models
Based on the evidence we have collected and the insights received, Indic Pacific Legal Research proposes the following research principles on the use of large reasoning models:
Principle 1: Emphasise Formal Verification. Train LRMs to produce verifiable reasoning traces, such as A* dynamics or SoS, for rigorous evaluation.
Principle 2: Be Cautious with Intermediate Traces. Recognise that traces may be misleading; do not rely solely on them for trust or understanding.
Principle 3: Avoid Anthropomorphisation. Focus on functional reasoning, not human-like traces, to prevent false confidence.
Principle 4: Evaluate Both Process and Outcome. Assess both final answer accuracy and reasoning process validity in benchmarks.
Principle 5: Transparency in Training Data. Be clear about training data, especially human-like traces, to understand model behaviour.
Principle 6: Modular Design. Use modular components for flexibility in reasoning structures and strategies.
Principle 7: Diverse Reasoning Structures. Experiment with chains, trees, and graphs for task suitability, balancing cost and effectiveness.
Principle 8: Operator-Based Reasoning. Implement operators (generate, refine, prune) to manipulate and refine reasoning processes.
Principle 9: Balanced Training. Use SFT and RL in two-phase training for foundation and refinement.
Principle 10: Process-Based Evaluation. Evaluate the entire reasoning process for correctness and feedback, not just outcomes.
Principle 11: Integration with Symbolic AI. Combine LRMs with constraint solvers or planning algorithms for enhanced reasoning.
Principle 12: Interactive Reasoning. Design LRMs for environmental interaction, using feedback to refine reasoning in sequential tasks.
Please note that all principles are purely consultative and have no binding value for the members of Indic Pacific Legal Research. We permit the use of these principles for strictly non-commercial purposes, provided that we are duly cited and referenced.
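To illustrate Principles 4 and 10 (evaluating both process and outcome), here is a small, purely illustrative sketch in Python. The trace format, step syntax, and helper names are assumptions made for this example only; they are not drawn from any vendor's output format or from the principles themselves.

```python
# Illustrative sketch: check both the reasoning process and the outcome of a
# hypothetical arithmetic trace. The "a op b = c" step format is an assumption
# made for this example, not a standard LRM output format.
import operator
from typing import List, Tuple

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def check_step(step: str) -> bool:
    """Return True if a single 'a op b = c' step is arithmetically valid."""
    try:
        expr, claimed = step.split("=")
        a, op, b = expr.split()
        return abs(OPS[op](float(a), float(b)) - float(claimed)) < 1e-9
    except (ValueError, KeyError):
        return False  # malformed steps count as process failures

def evaluate_trace(trace: List[str], final_answer: float, expected: float) -> Tuple[bool, bool]:
    """Evaluate process validity (every step checks out) and outcome accuracy separately."""
    process_ok = all(check_step(s) for s in trace)
    outcome_ok = abs(final_answer - expected) < 1e-9
    return process_ok, outcome_ok

# A trace can reach the right answer through an invalid step: outcome True, process False.
trace = ["12 * 7 = 84", "84 + 9 = 95", "95 - 2 = 91"]   # middle step is wrong (84 + 9 = 93)
print(evaluate_trace(trace, final_answer=91.0, expected=91.0))  # (False, True)
```

The point of the sketch is that a trace can land on the correct final answer through an invalid intermediate step, which is precisely why outcome-only benchmarks can overstate a model's reasoning ability.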
IndoPacific.App Celebrates its Glorious 2 Years
The IndoPacific App, launched under Abhivardhan's leadership in 2023, was a systemic reform at Indic Pacific Legal Research to better document our research publications and contributions. Through our partnership with the Indian Society of Artificial Intelligence and Law, ISAIL.IN's publications and documentation are also registered on the IndoPacific App, under the AiStandard.io Alliance and otherwise.
As this archive of mostly legal and policy literature completes two years of existence under Indic Pacific's VLiGTA research & innovation division, we are glad to share some statistics, updated as of April 12, 2025, and verified manually after our use of Generative AI tools, which means the figures are double-checked:
We host publications and documentation from exactly 238 original authors (1 error removed).
Our founder, Mr Abhivardhan's own publications constitute roughly 10% of these.
The number of publications on the IndoPacific App stands at 85; however, if we count the chapters, contributory sections, and articles within research collections, the number of research contributions stands at 304 unique contributions, a historic figure.
If we attribute these 304 unique contributions to each author (in the form of chapters to a research collection or handbook, a report, or a brief, for instance), the number of individual author credits will cross 300 as per our approximate estimates.
This means something simple, honest and obvious. The IndoPacific.App, started by our Founder, Abhivardhan, is the biggest technology law archive of mostly Indian authors, with around 238 original authors documented and 304 unique published contributions featured. There is no law firm, consultancy, think tank or institution with such a large, independently supported technology law archive, and we are proud to have achieved this feat within the five-year existence of both Indic Pacific Legal Research and the Indian Society of Artificial Intelligence and Law.
Thank you for becoming a part of this research community, whether through Indic Pacific Legal Research or the Indian Society of Artificial Intelligence and Law. It is our honour and duty to safeguard this archive for all, and it is 99% free (except the handbooks). So don't wait: go and download some amazing works from the IndoPacific.app today.

  • Crafting the Future: Gratitude to DNLU Jabalpur and the Pivotal Role of aiact.in in Shaping AI Governance

    At Indic Pacific Legal Research LLP, we are thrilled to extend our heartfelt gratitude to Dharmashastra National Law University (DNLU), Jabalpur, for their remarkable initiative in hosting a Legislative Drafting Competition centered on the "Artificial Intelligence (Development and Regulation) Act." It’s a moment of pride and affirmation for us to witness a leading Indian law school engage with the critical intersection of AI and law—a space we’ve been passionately shaping since our inception in 2019. DNLU’s efforts to nurture innovative legal thinking align beautifully with our mission to foster responsible AI development and governance in India, and we wish them resounding success in this endeavour. This moment also shines a light on the journey of aiact.in —our flagship project, the Artificial Intelligence (Development & Regulation) Act, 2023, spearheaded by our founder, Abhivardhan. Launched in November 2023 with no grand expectations, this privately proposed AI bill has grown into a pivotal resource, inspiring conversations like the one at DNLU. What began as a vision to craft an India-centric framework for AI regulation has, in just over a year, garnered appreciation from developers, judges, and technologists alike. Its strength lies in its feedback-driven approach—offering a practical, adaptable blueprint that stakeholders can refine and build upon. Seeing it spark a legislative drafting competition at DNLU is a testament to its relevance and potential to influence India’s AI policy landscape. For us at Indic Pacific, aiact.in  is more than a draft—it’s a cornerstone of our commitment to pioneering technology law solutions with an Indo-Pacific lens. Despite early skepticism (including a dismissive encounter with a law firm that overlooked its originality), this initiative has proven its worth by amplifying Indian perspectives in a global discourse often dominated by Western frameworks. It embodies our ethos of salience, persistence, and adaptivity, driving dialogue among startups, MSMEs, and policymakers. Through our Research & Innovation Division, VLiGTA®, we’ve ensured aiact.in  remains a dynamic tool—evolving with insights from industry and academia, as evidenced by its recognition in DNLU’s competition. We’re deeply grateful to DNLU Jabalpur for not only embracing this theme but also acknowledging our efforts in shaping AI governance. Your competition is a powerful step toward building a future where AI is harnessed responsibly, and we at Indic Pacific are honoured to be part of this narrative. Here’s to continued collaboration and innovation—may DNLU’s students and faculty inspire the next wave of legal brilliance!

  • The Version 5 of Artificial Intelligence (Development & Regulation) Act, 2023 is Launched

    Indic Pacific Legal Research, under the stewardship of Abhivardhan, proudly presents Version 5.0 of the Draft Artificial Intelligence (Development & Regulation) Act, 2023 ( AIACT.in ). This iteration introduces pivotal amendments, with Section 23 leading as a freshly revised cornerstone, alongside updates to Section 7, Section 9, Section 13, Section 20A, and the newly enacted Section 24-A. These changes underscore Indic Pacific’s commitment to ethical, transparent, and inclusive AI regulation in India. Section 23: Content Provenance and Identification (Key Highlight) Indic Pacific has reimagined Section 23 to set a gold standard for AI-generated content. The amendment mandates watermarking with detailed metadata—covering scraping methods, data origins, and licensing—while enforcing ethical data practices limited to consented or public sources. Developers of high-impact systems must secure insurance up to ₹50 crores, ensuring accountability and curbing misuse. This positions Indic Pacific at the forefront of content integrity. Section 7: Strengthened Risk Classification Indic Pacific refines AI risk tiers—Narrow, Medium, High, and Unintended—banning the latter and intensifying scrutiny on High-Risk systems. This amendment safeguards against unpredictable technologies, reinforcing public trust and security. Section 9: Oversight in Strategic Sectors High-risk AI in designated strategic sectors now falls under tailored regulations, with this Act prevailing over conflicting rules. Indic Pacific ensures robust governance where it matters most. Section 13: Enhanced National AI Ethics Code The updated ethics code prioritizes transparency, fairness, and human oversight, offering a clear roadmap for responsible AI. Indic Pacific champions ethical innovation with this refresh. Section 20A: Transparency in Public AI Initiatives Government and partnership AI projects must now disclose objectives, funding, and algorithms, backed by audits and public explanations. Indic Pacific drives accountability in the public sphere. Section 24-A: Right to AI Literacy Introduced A landmark addition, this section grants every individual access to AI literacy—covering concepts, impacts, and recourse options. Indic Pacific empowers citizens for an AI-driven future. These amendments, with Section 23 as the flagship, exemplify Indic Pacific’s vision for a balanced, responsible AI ecosystem. Please give your feedback on this version of the bill at vligta@indicpacific.com .
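To make Section 23's watermarking-with-metadata idea more tangible, here is a minimal, purely illustrative sketch. The field names and structure are our own assumptions for demonstration, not text from AIACT.in, and a production system would likely rely on an established content-provenance standard rather than an ad hoc record like this.

```python
# Illustrative sketch: attach a hypothetical provenance record to AI-generated
# content. Field names are assumptions for this example, not AIACT.in text.
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: str, model_id: str, data_origins: list,
                      scraping_method: str, licence: str) -> dict:
    """Build a provenance record binding the content hash to disclosure metadata."""
    return {
        "content_sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        "model_id": model_id,
        "data_origins": data_origins,          # e.g. consented or public sources only
        "scraping_method": scraping_method,
        "licence": licence,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

record = provenance_record(
    content="Sample AI-generated paragraph.",
    model_id="example-model-v1",
    data_origins=["public-domain corpus", "licensed news archive"],
    scraping_method="rate-limited crawl of permitted sources",
    licence="CC BY-NC 4.0",
)
print(json.dumps(record, indent=2))
```

Binding a hash of the generated content to disclosures about data origins, scraping methods and licensing is one simple way to make such a record verifiable after the fact, which is the spirit of the provenance requirement described above.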

  • Decoding the AI Competency Triad for Public Officials: A Deep Dive into India’s Strategic Framework

    The Ministry of Electronics and Information Technology (MeitY) recently launched its AI Competency Framework, aiming to equip public officials with the skills needed to responsibly integrate artificial intelligence into governance processes. Our latest report, "Decoding the AI Competency Triad for Public Officials" (IPLR-IG-014), provides an in-depth analysis of this framework and its implications for India’s public sector. This report is authored by Abhivardhan, Founder & Managing Partner, and interns at the Indian Society of Artificial Intelligence and Law, Yashita Parashar, Sneha Binu, and Gargi Mundotia. 📖 Access the full report here: https://indopacific.app/product/iplr-ig-014/ Why This Framework Matters India is at a pivotal moment in its AI journey, with initiatives like the IndiaAI Mission positioning the country as a global leader in ethical and inclusive AI adoption. The competency framework identifies three core skill areas—behavioral, functional, and domain-specific—that are essential for public officials navigating the complexities of AI governance. Key Highlights from the Report Behavioral Competencies Focuses on systems thinking, ethical governance, and innovative leadership to address complex societal challenges through AI. Functional Competencies Covers practical skills like risk assessment, procurement oversight, and data governance necessary for effective implementation of AI projects. Domain-Specific Competencies Tailored to high-impact sectors like healthcare, education, agriculture, urban mobility, and environmental management. Strategic Recommendations The report also provides actionable insights across three critical legal-policy dimensions: Data Policy Alignment: Ensuring privacy-by-design principles are embedded in every stage of AI deployment. Intellectual Property Management: Addressing gaps in knowledge sharing while safeguarding innovation rights. Accountability & Transparency: Establishing robust oversight mechanisms to ensure ethical use of AI technologies. Who Should Read This? This report is designed for policymakers, entrepreneurs, public officials, and citizens who want to understand how India is building capacity for responsible AI integration while addressing global challenges like bias mitigation and data privacy. 📖 Access the full report here: https://indopacific.app/product/iplr-ig-014/

  • ciarb Guideline on the Use of AI in Arbitration (2025), Explained

This insight is co-authored by Vishwam Jindal, Chief Executive Officer, WebNyay.
The Chartered Institute of Arbitrators (CIArb) guideline on the use of AI in arbitration, published in 2025, provides a detailed framework for integrating AI into arbitration proceedings. This analysis covers every chapter, highlighting what each includes and identifying potential gaps. Below, we break down the key sections for clarity, followed by a detailed survey note for a deeper understanding.

Chapter-by-Chapter Analysis
Part I: Benefits and Risks: Details AI's advantages (e.g., legal research, data analysis) and risks (e.g., confidentiality, bias), providing a broad overview.
Part II: General Recommendations: Advises on due diligence, risk-benefit analysis, legal compliance, and maintaining accountability for AI use.
Part III: Parties' Use of AI: Covers arbitrators' powers to regulate AI, party autonomy in agreeing on its use, and disclosure requirements for transparency.
Part IV: Use of AI by Arbitrators: Allows discretionary AI use for efficiency, prohibits decision delegation, and emphasizes transparency through party consultation.
Appendices: Includes templates for AI use agreements and procedural orders, aiding practical implementation.
Definitions: Provides clear definitions for terms like AI, hallucination, and tribunal, based on industry standards. On definitions, it would have been better had CIArb adopted AI-related definitions from third-party technical bodies such as IEEE, Creative Commons or ISO, rather than from IBM.

Part I: Benefits and Risks
Part I provides a balanced view of AI's potential benefits and risks in arbitration. The benefits section (1.1-1.10) highlights efficiency gains through legal research enhancement, data analysis capabilities, text generation assistance, evidence collection streamlining, and translation/transcription improvements. Notably, section 1.10 acknowledges AI's potential to remedy "inequality of arms" by providing affordable resources to under-resourced parties. The risks section (2.1-2.9) addresses significant concerns including confidentiality breaches when using third-party AI tools, data integrity and cybersecurity vulnerabilities, impartiality issues arising from algorithmic bias, due process risks, the "black box" problem of AI opacity, enforceability risks for arbitral awards, and environmental impacts of energy-intensive AI systems.

Benefits
AI offers transformative potential in arbitration by enhancing efficiency and quality across various stages of the process:
Legal Research: AI-powered tools outperform traditional search engines with their adaptability and predictive capabilities, enabling faster and more precise research.
Data Analysis: AI tools can process large datasets to identify patterns, correlations, and inconsistencies, aiding in case preparation.
Text Generation: Tools can draft, summarize, and refine documents while ensuring grammatical accuracy and coherence.
Translation and Transcription: AI facilitates multilingual arbitration by translating documents and transcribing hearings at lower costs.
Case Analysis: Predictive analytics provide insights into case outcomes and procedural strategies.
Evidence Collection: AI streamlines evidence gathering and verification, including detecting deep fakes or fabricated evidence.

Risks
Despite its advantages, AI introduces several risks:
Confidentiality: Inputting sensitive data into third-party AI tools raises concerns about data security and misuse.
Bias : Algorithmic bias can compromise impartiality if datasets or algorithms are flawed. Due Process : Over-reliance on AI tools may undermine parties' ability to present their cases fully. "Black Box" Problem : The opaque nature of some AI algorithms can hinder transparency and accountability. Enforceability : The use of banned or restricted AI tools in certain jurisdictions could jeopardise the validity of arbitral awards. Limitations in Part 1 Part I exhibits several significant limitations that undermine its comprehensiveness: Incomplete treatment of risks : While identifying key risk categories, the guidelines lack depth in addressing bias detection and mitigation strategies, transparency mechanisms, and AI explainability challenges. Gaps in benefits coverage : The incomplete presentation of sections 1.5-1.9 suggests missing analysis of potential benefits such as evidence gathering and authentication applications. Absence of risk assessment framework : No structured methodology is provided for quantitatively evaluating the likelihood and severity of identified risks, leaving arbitrators without clear guidance on risk prioritisation. Limited forward-looking analysis : The section focuses primarily on current AI capabilities without adequately addressing how rapidly evolving AI technologies might create new benefits or risks in the near future. Part II: General Recommendations The CIArb guidelines emphasise a cautious yet proactive approach to AI use: Due Diligence : Arbitrators and parties should thoroughly understand any AI tool's functionality, risks, and legal implications before using it. Balancing Benefits and Risks : Users must weigh efficiency gains against potential threats to due process, confidentiality, or fairness. Accountability : The use of AI should not diminish the responsibility or accountability of parties or arbitrators. In summary, Part II establishes broad principles for AI adoption in arbitration. It encourages participants to conduct reasonable inquiries about AI tools' technology and function (3.1), weigh benefits against risks (3.2), investigate applicable AI regulations (3.3), and maintain responsibility despite AI use (3.4). The section addresses critical issues like AI "hallucinations" (factually incorrect outputs) and prohibits arbitrators from delegating decision-making responsibilities to AI systems. Part II provides general advice on due diligence, risk assessment, legal compliance, and accountability for AI use. However, it has notable gaps: Lack of Specific Implementation Guidance: The recommendations, such as conducting inquiries into AI tools (3.1), are broad and lack practical tools like checklists or frameworks. For example, it could include a step-by-step guide for evaluating AI tool security or a risk-benefit analysis template, aiding users in application. Insufficient technical implementation guidance : The recommendations remain abstract without providing specific technical protocols for different types of AI tools or use cases. No Examples or Hypothetical / Real Case Studies: Without real-world scenarios or even comparable hypothetical scenarios, such as how a party assessed an AI tool for confidentiality risks, practitioners may struggle to apply the recommendations. Hypothetical examples could bridge this gap, enhancing understanding. Absence of AI literacy standards : No baseline competency requirements are established for arbitration participants using AI tools, creating potential disparities in understanding and application. 
Missing protocols for AI transparency : The guidelines don't specify concrete mechanisms to make AI processes comprehensible to all parties, particularly important given the "black box" problem acknowledged elsewhere. No Mechanism for Periodic Review: Similar to Part I, there is no provision for regularly updating the recommendations, such as a biennial review process, which is critical given AI's rapid evolution, like the advent of generative AI models. Lack of Input from Technology Experts: The guideline does not indicate consultation with AI specialists or technologists, such as input from organizations like the IEEE ( IEEE AI Ethics ), which could ensure the recommendations reflect current industry practices and technological realities. Part III: Parties’ Use of AI Arbitrators’ Powers Arbitrators have broad authority to regulate parties' use of AI: They may issue procedural orders requiring disclosure of AI use if it impacts evidence or proceedings. Arbitrators can appoint experts to assess specific AI tools or their implications for a case. Party Autonomy Parties retain significant autonomy to agree on the permissible scope of AI use in arbitration. Arbitrators are encouraged to facilitate discussions about potential risks and benefits during case management conferences. Disclosure Requirements Parties may be required to disclose their use of AI tools to preserve procedural integrity. Non-compliance with disclosure obligations could lead to adverse inferences or cost penalties. In summary, Part III establishes a framework for regulating parties' AI use. Section 4 outlines arbitrators' powers to direct and regulate AI use, including appointing AI experts (4.2), preserving procedural integrity (4.3), requiring disclosure (4.4), and enforcing compliance (4.7). Section 5 respects party autonomy in AI decisions while encouraging proactive discussion of AI parameters. Sections 6 and 7 address rulings on AI admissibility and disclosure requirements respectively. Part III contains several problematic gaps: Ambiguity in Private vs. Procedural AI Use: Section 4.5 states arbitrators cannot regulate private use unless it interferes with proceedings, but the boundary is vague. For example, using AI for internal strategy could blur lines, and clearer definitions are needed. Inadequate dispute resolution mechanisms : Despite acknowledging potential disagreements over AI use, the guidelines lack specific procedures for efficiently resolving such disputes. Disclosure framework tensions : The optional nature of disclosure creates uncertainty about when transparency should prevail over party discretion, potentially undermining procedural fairness. Absence of cost allocation guidance : The guidelines don't address how costs related to AI tools or AI-related disputes should be allocated between parties. Limited cross-border regulatory guidance : Insufficient attention is paid to navigating conflicts between different jurisdictions' AI regulations, a critical issue in international arbitration. Potential Issues with Over-Reliance on Party Consent: The emphasis on party agreement (Section 5) might limit arbitrators’ ability to act decisively if parties disagree, especially if one party lacks technical expertise, potentially undermining procedural integrity. Need for Detailed Criteria for Selecting AI Experts: While arbitrators can appoint AI experts, there are no specific criteria, such as qualifications in AI ethics or experience in arbitration, which could ensure expert suitability and consistency. 
Part IV: Use of AI by Arbitrators Discretionary Use Arbitrators may leverage AI tools to enhance efficiency but must ensure: Independent judgment is maintained. Tasks such as legal analysis or decision-making are not delegated entirely to AI. Transparency Arbitrators are encouraged to consult parties before using any AI tool. If parties object, arbitrators should refrain from using that tool unless all concerns are addressed. Responsibility Regardless of AI involvement, arbitrators remain fully accountable for all decisions and awards issued. In summary, Part IV addresses arbitrators' AI usage, establishing that arbitrators may employ AI to enhance efficiency (8.1) but must not relinquish decision-making authority (8.2), must verify AI outputs independently (8.3), and must assume full responsibility for awards regardless of AI assistance (8.4). Section 9 emphasises transparency through consultation with parties (9.1) and other tribunal members (9.2). Part IV exhibits several notable limitations: Inadequate technical implementation guidance : The section provides general principles without specific technical protocols for different AI applications in arbitrator decision-making. Missing AI literacy standards for arbitrators : No baseline competency requirements are established to ensure arbitrators sufficiently understand the AI tools they employ. Insufficient documentation requirements : The guidelines don't specify how arbitrators should document AI influence on their decision-making process in awards or orders. Absence of practical examples : Without concrete illustrations of appropriate versus inappropriate AI use by arbitrators, the guidance remains abstract and difficult to apply. Underdeveloped bias mitigation framework : While acknowledging potential confirmation bias, the guidelines lack specific strategies for detecting and counteracting such biases. Appendix A: Agreement on the Use of AI in Arbitration Appendix A provides a template agreement for parties to formalize AI use parameters, including sections on permitted AI tools, authorized uses, disclosure obligations, confidentiality preservation, and tribunal AI use1. Critical Deficiencies Appendix A falls short in several areas: Excessive generality : The template may be too generic for complex or specialised AI applications, potentially failing to address nuanced requirements of different arbitration contexts. Limited customisation guidance : No framework is provided for adapting the template to different types of arbitration or technological capabilities of the parties. Poor institutional integration : The template doesn't adequately address how it interfaces with various institutional arbitration rules that may have their own technology provisions. Static nature : No provisions exist for updating the agreement as AI capabilities evolve during potentially lengthy proceedings. Insufficient technical validation mechanisms : The template lacks provisions for verifying technical compliance with agreed AI parameters. Appendix B: Procedural Order on the Use of AI in Arbitration Appendix B provides both short-form and long-form templates for arbitrators to issue procedural orders on AI use, introducing the concept of "High Risk AI Use" requiring mandatory disclosure, establishing procedural steps for transparency, and enabling parties to comment on proposed AI applications. 
Critical Deficiencies Appendix B contains several notable gaps: Technology adaptation limitations : The templates lack mechanisms for addressing emerging AI technologies that may develop during proceedings. Enforcement uncertainty : Limited guidance is provided on monitoring and enforcing compliance with AI-related orders. Insufficient technical validation : The templates don't establish concrete mechanisms for verifying adherence to AI usage restrictions. Absence of update protocols : No provisions exist for modifying orders as AI capabilities evolve during proceedings. Limited remedial options : Beyond adverse inferences and costs, few specific remedies are provided for addressing non-compliance. Conclusion: Actionable Recommendations for Enhancement The CIArb AI Guideline represents a significant first step toward establishing a framework for AI integration in arbitration, demonstrating awareness of both benefits and risks while respecting party autonomy. However, to transform this preliminary framework into a robust and practical tool, several enhancements are necessary: Technical Implementation Framework : Develop supplementary technical guidelines with specific protocols for AI verification, validation, and explainability across different arbitration contexts and AI applications. AI Literacy Standards : Establish minimum competency requirements and educational resources for arbitrators and practitioners to ensure informed decision-making about AI tools. Adaptability Mechanisms : Implement a formal revision process with specific timelines for guideline updates to address rapidly evolving AI capabilities. Transparency Protocols : Create more detailed transparency requirements with clearer thresholds for mandatory disclosure to balance flexibility with procedural fairness. Risk Assessment Methodology : Develop a quantitative framework for systematically evaluating AI risks in different arbitration contexts. Practical Examples Library : Supplement each section with concrete case studies illustrating appropriate and inappropriate AI applications in arbitration. Institutional Integration Guidance : Provide specific recommendations for aligning these guidelines with existing institutional arbitration rules.

  • Does Art Law in India Require Regulation? Maybe.

India, a country with an unparalleled artistic heritage, faces unique legal challenges in regulating its growing art market. While existing laws protect antiquities and govern intellectual property, the lack of a dedicated regulatory body for art has led to gaps in dispute resolution, authentication, taxation, and trade compliance. Moreover, the rise of digital art and NFTs has introduced complexities that Indian laws are yet to fully address. Without proper oversight, artists, collectors, and investors navigate a market that is often ambiguous and vulnerable to exploitation. This article highlights these pressing issues and the crucial role of arbitration, mediation, and regulatory reforms in shaping a more structured and secure art ecosystem.

India's Art Industry at Loggerheads?
The Indian art industry remains largely unregulated, leading to issues such as forgery, misrepresentation, and unclear dispute resolution mechanisms. Without a formal authentication authority, buyers and collectors often struggle to verify the provenance of artworks, increasing the risk of fraud and duplicate artworks. This lack of oversight has allowed counterfeit artworks to flood the market, eroding trust and making transactions riskier for both buyers and sellers. Adding to these concerns is the absence of regulated pricing and taxation policies, making it difficult for artists and buyers to navigate legal obligations. Unlike other industries that benefit from structured oversight, art transactions in India remain fragmented, leading to inconsistent taxation and compliance challenges. Many deals occur in informal markets, where tax evasion and opaque pricing structures prevail. Without a dedicated Art Regulatory Authority, buyers rely on informal networks for provenance verification, and disputes often escalate into prolonged litigation. The lack of streamlined governance and regulation in the art market highlights the need for a structured regulatory framework that can ensure transparency, fairness, and accountability in all aspects of art trade and ownership.

In India, however, art arbitration involves an interplay between intellectual property rights and arbitration law. Under the Arbitration and Conciliation Act, awards are unenforceable if they arise out of an "in-arbitrable" dispute. Art disputes involve issues of ownership, authenticity, copyright infringement, succession, and testamentary matters, and are therefore often contested as being in-arbitrable. Art disputes also frequently involve complex issues such as authorship claims, forgery allegations, and breaches of contractual terms. Given the time-consuming nature of traditional litigation, arbitration and mediation have become preferred modes of dispute resolution in the global art market. These mechanisms provide a faster, more cost-effective, and confidential approach to resolving conflicts without jeopardizing artistic or commercial relationships. Mediation allows parties to reach a mutually acceptable settlement while preserving professional relationships. This is particularly useful in cases involving artist-gallery disputes, copyright infringements, and ownership claims. A mediated resolution ensures that creative partnerships remain intact, preventing long legal battles from hindering artistic growth. Arbitration, on the other hand, ensures confidential, specialised, and enforceable decisions, making it ideal for high-value transactions.
Art-related disputes often involve international parties, and arbitration provides a neutral forum for resolution. Institutions such as the Delhi International Arbitration Centre (DIAC) and the Mumbai Centre for International Arbitration (MCIA) have begun handling art-related disputes, yet India still lacks dedicated arbitral rules for art transactions. By integrating alternative dispute resolution mechanisms into the art industry, India can ensure faster dispute resolution and stronger legal safeguards for artists and collectors. With the rise of blockchain technology, digital art and NFTs (Non-Fungible Tokens) have opened new avenues for artists to monetise their work. However, Indian law remains silent on key aspects, leading to challenges in ownership rights, resale royalties, and tax implications. One of the biggest concerns is ownership rights: who holds the copyright for an NFT, the artist or the buyer? Traditional art markets recognize artists' rights to their works, but in the digital space the legal standing of NFT ownership is still unregulated. Moreover, there is ambiguity surrounding resale royalties: artists often receive no compensation when their NFTs are resold on the secondary market. In the absence of clear legal provisions, artists are often at the mercy of marketplace policies. Tax implications also remain unsettled. Are NFTs classified as goods, securities, or digital assets under Indian law? The lack of a proper classification results in taxation challenges, leaving buyers and sellers in a legal gray area. Without a defined legal framework, NFT transactions could potentially fall under multiple tax regulations, leading to confusion and unintended liabilities. The lack of regulation has led to instances of digital art theft, plagiarism, and unauthorized commercial use, leaving artists vulnerable. The rise of AI-generated art and digital manipulation further complicates the legal landscape. The international art trade is heavily regulated, and India has multiple laws governing the import and export of artworks. However, enforcement gaps have led to a thriving underground market where valuable artifacts bypass legal scrutiny. The Foreign Exchange Management Act (FEMA), 1999 governs cross-border transactions. Restrictions on foreign direct investment (FDI) in the art sector limit global collaborations, while compliance with Reserve Bank of India (RBI) regulations is mandatory. The Goods and Services Tax (GST) applies to artworks: original paintings and sculptures attract 12% GST, while prints and reproductions are taxed at 18% GST (Ministry of Finance, Government of India); a rough calculation is sketched at the end of this piece. High taxes encourage informal trade and underreporting, impacting transparency. The Consumer Protection Act, 2019 protects buyers from misrepresentation and fraud, particularly in online sales (Department of Consumer Affairs, India). However, the lack of a formal certification authority makes enforcement difficult. The Customs Tariff Act, 1975 governs import duties and requires special permits for antique exports (Central Board of Indirect Taxes and Customs). Stronger inter-agency collaboration is needed to curb illegal art trafficking and reclaim stolen heritage.

Conclusion

Art law in India is at a crossroads, requiring urgent regulatory intervention to balance cultural preservation with modern commercial needs.
By establishing a dedicated regulatory body, modernizing legal frameworks, and integrating alternative dispute resolution mechanisms, India can create a more structured and globally competitive art market.
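As a purely arithmetical illustration of the GST rates mentioned above (12% on original paintings and sculptures, 18% on prints and reproductions), here is a minimal sketch; the compute_gst helper and the sale figures are hypothetical, and the sketch assumes the rates apply to the full sale price.

def compute_gst(sale_price_inr: float, is_original_work: bool) -> float:
    # Assumed rates from the discussion above: 12% for original paintings and
    # sculptures, 18% for prints and reproductions.
    rate = 0.12 if is_original_work else 0.18
    return sale_price_inr * rate

# Hypothetical figures: a Rs 10,00,000 original painting and a Rs 50,000 print.
print(compute_gst(1_000_000, True))   # 120000.0 -> Rs 1,20,000 GST at 12%
print(compute_gst(50_000, False))     # 9000.0   -> Rs 9,000 GST at 18%

Even at these rates, as the article notes, high taxes encourage informal trade and underreporting.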

  • Excited to share: The Indo-Pacific Principles of Legal & Policy Writing - Our Blueprint for Tech Law Excellence! ✨📊🤖

    At Indic Pacific Legal Research, we're thrilled to present a set of guiding writing standards crafted to elevate legal and policy communication in the tech space. Get the writing guidelines at indicpacific.com/guidelines. In today's complex AI governance landscape, clear communication isn't just nice-to-have - it's essential. 🔍 Why these principles matter now more than ever:
1️⃣ Precision Over Prolixity ⚡ - As India develops its AI regulatory framework (like MeitY's recent report on AI Governance Guidelines), our work requires communication that cuts through complexity. Every word must earn its place!
2️⃣ Nuance and Novelty Matter 💡 - Our projects demonstrate our commitment to original thinking over redundant reviews.
3️⃣ Be Unassailable 🛡️ - Our consultancy work demands arguments sharp enough to cut through noise yet grounded in reality - essential when advising on AI governance frameworks.
4️⃣ Clarity Is Authority 📣 - We've learned that complex tech law ideas demand simple expressions. If readers struggle, we haven't mastered our craft!
5️⃣ Visuals Amplify Words 📊 - Our "Graphics-powered Insights" service exemplifies how diagrams and visuals can enhance understanding of complex AI governance issues.
6️⃣ Always Anchor in Relevance 🎯 - Our approach to "whole-of-government" AI regulation demonstrates how every idea must drive home a purpose.
7️⃣ Respect the Reader's Time ⏱️ - We prioritise purposeful precision that both informs and engages.
These principles guide our advisory work with tech companies and government stakeholders as we navigate India's evolving AI ecosystem. They're not just writing rules - they're the foundation of responsible tech governance! 🌐 As our founder Abhivardhan says: "Complex ideas demand simple expressions." This philosophy powers our work in technology law, AI governance, and policy development across India and beyond. What principles guide YOUR communication in the tech policy space? Share below! 👇

  • New Publication: Artificial Intelligence and Policy in India, Volume 6

    Proud to announce our latest publication: "Artificial Intelligence and Policy in India, Volume 6," edited by Abhivardhan! 🎉📘 This research collection represents our continued commitment to exploring the frontier of AI governance and implementation in India. 🇮🇳🤖 Read this collection at https://indopacific.app/product/artificial-intelligence-and-policy-in-india-volume-6-aipi-v6/
In collaboration with the Indian Society of Artificial Intelligence and Law (ISAIL), we've brought together four exceptional papers from talented ISAIL interns:
🔹 Rasleen Kaur Dua tackles ethical and regulatory challenges in AI-driven supply chains
🔹 Parvathy Arun explores how algorithms are revolutionizing financial trading
🔹 Oshi Yadav investigates blockchain's transformative role in our digital economy
🔹 Eva Mathur examines how legal education must evolve in the age of technology
This volume is essential reading for anyone interested in understanding how AI is reshaping India's policy landscape across multiple sectors. 📊⚖️💡 Available now! Tag someone who needs this resource in their professional library.

  • New Report: Deciphering Artificial Intelligence Hype and its Legal-Economic Risks, VLiGTA-TR-001

    We are eager to release the Vidhitsa Law Institute's first technical report, on artificial intelligence hype and its legal-economic risks. Bhavana J Sekhar (Principal Researcher) and Poulomi Chatterjee (Contributing Researcher) have co-authored this report with me. In this work, we address in detail the issue of hype cycles caused by artificial intelligence technologies. This report is an initial research contribution developed by the team of the Vidhitsa Law Institute of Global and Technology Affairs (VLiGTA) as part of the efforts of its Artificial Intelligence Resilience department. We have continued the work we began at the Indian Society of Artificial Intelligence and Law (ISAIL) in 2021 on formalising ethics research on the trend of Artificial Intelligence hype. In my discussions and consultations with Dr Jeffrey Funk, a former Faculty at the National University of Singapore, Bogdan Grigorescu, a tech industry expert and an ISAIL Alumnus, and Dr Richard Self from the University of Derby, I realised that it is necessary to capture the scope and extent of Artificial Intelligence hype beyond the competition policy and data privacy issues that many developed countries, including the D9 group, have already faced. Many technology companies inflate their valuations and use Artificial Intelligence to hype the value of their products and services. This can be done by influencing stock prices, distorting perceptions, misdirecting demand, exploiting credibility concerns, and other methods. The exploitative nature of AI hype as we know it rests on the interconnectedness of the information and digital economy and on how even minuscule economic and ethical innovations in AI, as a technology, can be abused. Bhavana's Market Analysis is succinct and focuses on the points of convergence, and Poulomi's evaluation of the ethics of Artificial Intelligence is much appreciated. I express my special regards to Sanad Arora from the Vidhitsa Law Institute and Ayush Kumar Rathore from Indic Pacific's Technology Team for their moral support. Some of the key aspects discussed in the report concern the perpetuation of hype cycles and their formalisation in a legal rubric for regulators. We have also taken a soft-law perspective to address larger economic and technical issues and offered recommendations. Based on our research, we have formulated seven working conditions to determine artificial intelligence hype, which are based on a set of stages:

Stage 1: Influence or Generation Determination
An Artificial Intelligence hype cycle is perpetuated to influence or generate market perception in a real-time scenario, such that a class of Artificial Intelligence technology as a product or service is used in a participatory or preparatory sense to influence or generate the hype cycle.

Stage 2: Influencing or Generating Market Perceptions and Conditions
The hype cycle may be continuous or erratic, but it has a real-time impact on market perceptions that affect the market for the products or services involving Artificial Intelligence technologies, as estimated from a standardised, regulatory, judicial or statutory point of view. The hype cycle may directly or indirectly perpetuate the course of specific anti-competitive practices. Beyond the real-time impact on market perceptions, the consecutive effects of that impact may distort a limited set of related markets, provided that the specific anti-competitive practices are furthered in a distinct pattern.
Stage 3: Uninformed or Disinformed Markets
The features of the product or service subject to the hype cycle are left uninformed or disinformed to the market. Misinforming the market may be construed as keeping the market merely uninformed, except in mutually exclusive cases.

Stage 4: Misdirected Perceptions in the Information and Digital Economy
The hype cycle may be used to distract the information economy by converting the state of being uninformed or disinformed into misdirected perception. This means that the hype cycle around a product or service may not clarify certain specifics and may cause the public or market players to shift their focus towards ancillary considerations, comfortably ignoring the fact that they have been uninformed or disinformed.

Stage 5: Estimation of the Hype Cycle through Risk Determination
Even if preliminary clarifications or assessments are provided to the market, a lack of due diligence in determining the inexplicable features of the Artificial Intelligence technology, in any form or means, as part of the product or service requires assessing the hype cycle with a risk-centric approach.

Further interpretation and explanations are provided in the report; a simple illustrative checklist of these stages also appears at the end of this note.

Recommendations in this Report

Companies must make clear to regulatory bodies the investment in, and ethical design of, products and services that involve narrow AI and high-intensity AI technologies. Maintaining efficient knowledge management systems that cater to IP issues is important. It is essential that the economic and ethical repercussions of the byproducts of knowledge management are addressed carefully, because many Artificial Intelligence technologies will remain inexplicable for reasons including ethical ambiguity. If Artificial Intelligence technologies are included in any managerial-level groups, departments or divisions, including the board of directors, for consultative, reliance or any other tangible purpose, then, regardless of their attribution to the knowledge management systems maintained by the company itself (including concerns around intellectual property), a risk-oriented practice of maintaining legitimate and viable transparency on issues around data protection and privacy and algorithmic activities and operations must be adopted. Regulators can opt for self-regulatory directives or solutions. Where regulatory sandboxes are necessary, there must be separate guidelines for such technologies (since they are not products or services) by virtue of their use case in the realm of corporate governance. The transboundary flow of data, based on some commonalities of ethical and quality assessment, can be agreed among various countries subject to their data localisation and quality policies. When it comes to Artificial Intelligence technologies, to reduce or detect the impact and aftermath of Artificial Intelligence hype cycles, governments must negotiate an ethical free flow of data and map certain algorithmic activities and operations that affect public welfare on a case-to-case basis. We propose that the Working Conditions to Determine Artificial Intelligence Hype can be regarded, in a consultative sense, as a framework for various stakeholders to combine competition policy and technology governance concerns. We are open to consultation, feedback and alternate opinions.
We also propose that the Model Algorithmic Ethics Standards (MAES) be put into use, so that some estimations can be made at a preliminary level as regulatory sandboxes are subject to procurement. The Report is available here. Price: 200 INR
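For readers who find it easier to see the staged working conditions summarised above as a checklist, here is a minimal sketch covering only the five stages described in this note; the HypeCycleAssessment class, the boolean representation, and the scrutiny threshold are our own assumptions and are not prescribed by the report.

from dataclasses import dataclass

@dataclass
class HypeCycleAssessment:
    # Each flag loosely mirrors one staged working condition described above.
    influences_or_generates_perception: bool   # Stage 1
    affects_market_conditions: bool            # Stage 2
    market_uninformed_or_disinformed: bool     # Stage 3
    perceptions_misdirected: bool              # Stage 4
    lacks_risk_based_due_diligence: bool       # Stage 5

    def stages_met(self) -> int:
        # Count how many of the staged conditions a given assessment records.
        return sum([
            self.influences_or_generates_perception,
            self.affects_market_conditions,
            self.market_uninformed_or_disinformed,
            self.perceptions_misdirected,
            self.lacks_risk_based_due_diligence,
        ])

    def warrants_closer_scrutiny(self) -> bool:
        # Assumed threshold for illustration only: the first three stages present.
        return (self.influences_or_generates_perception
                and self.affects_market_conditions
                and self.market_uninformed_or_disinformed)

# Hypothetical assessment of a product launch.
assessment = HypeCycleAssessment(True, True, True, False, False)
print(assessment.stages_met(), assessment.warrants_closer_scrutiny())  # 3 True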
