

Abhivardhan

ChatGPT & the Problem with Derivatives as Solutions

ChatGPT has been embroiled in several controversies in the market for AI-based digital products and services. Concurrently, Google has introduced Bard, a ChatGPT competitor that draws on its LaMDA conversational technology. Many such use cases have now been proposed by entrepreneurs, content creators and big technology companies. The problem arises when the novelty and value behind such derivative "solutions" are not properly assessed. That is where the trouble with using ChatGPT as a vehicle for such business solutions begins.


In this article, I analyse the legal and ethical aspects of treating derivatives built out of ChatGPT as the kinds of digital products and solutions one could anticipate. The idea is to formulate a legal approach to this practice and to see whether any legitimate solutions can be generated from it. Hence, I have divided the line of enquiry into two parts:

  • What derivatives or sub-products could be built from ChatGPT as potential solutions? How would they work in the market? What disruptions could they cause at an observational level?

  • How does this attempt to democratise ChatGPT by creating digital products/services as derivatives (or derivatives of derivatives) affect the future of work at fundamental and operative levels?

I would also add that this article is limited to how Generative AI tools can be democratised to build derivative products/services as commercial solutions, and it does not cover other types of narrow AI applications.

To know about ChatGPT and its impact on Technology Governance, read this article.

The Basics of Creating Derivatives as Solutions


When ChatGPT was made available as a Free Research Preview, which is still the case, it was obvious to anyone how many use cases could be observed or figured out to provide services of many kinds. In my previous article on ChatGPT, I discussed DoNotPay as a proposed "use case" for drafting legal instruments and documents for everyday matters, such as civil liability and consumer law actions, especially for those who live in the United States, considering the exorbitant cost of handling such matters there. Here is a figure which explains how one can conceive a derivative product, or a derivative of a derivative.

Figure 1: Derivative of ChatGPT or Derivative of Derivatives

Now, to be clear, making a derivative is not hard. It is entirely possible to create a derivative or product out of ChatGPT, given that the underlying technology is based on artificial intelligence and natural language processing.


One way to create a derivative or product out of ChatGPT would be to use its core technology, a class of machine learning models known as transformers, to train a new model with a specific focus or application. For example, a company might use ChatGPT's technology to create a chatbot or virtual assistant that can answer specific types of questions or help customers navigate a particular product or service.
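To make this route concrete, here is a minimal sketch of adapting a transformer model to a narrow support domain. Since ChatGPT's own weights are not publicly available, the sketch uses the open GPT-2 model via the Hugging Face transformers and datasets libraries as a stand-in; the "X200 router" product and the two training examples are purely hypothetical, and a real derivative would need a far larger, domain-specific dataset.

```python
# Minimal sketch: fine-tuning an open transformer (GPT-2, standing in for
# ChatGPT-class models, whose weights are not public) on a narrow support domain.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token           # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical domain data: question/answer pairs for a fictional "X200 router"
examples = [
    "Q: How do I reset the X200 router? A: Hold the reset button for ten seconds.",
    "Q: What does the amber light mean? A: The X200 is still acquiring a connection.",
]
dataset = Dataset.from_dict({"text": examples})

def tokenize(batch):
    tokens = tokenizer(batch["text"], truncation=True,
                       padding="max_length", max_length=64)
    tokens["labels"] = tokens["input_ids"].copy()   # causal LM: model predicts its own input
    return tokens

dataset = dataset.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(output_dir="x200-support-bot", num_train_epochs=1,
                         per_device_train_batch_size=2, learning_rate=5e-5)
Trainer(model=model, args=args, train_dataset=dataset).train()
```

The resulting checkpoint could then be served behind an API as the company's own product-specific assistant.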


Another approach could be to use ChatGPT's technology as the basis for a new application of natural language processing. For example, a company could use the technology to develop a system that analyses customer feedback or reviews and automatically generates summaries or sentiment analysis.
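As an illustration only, the sketch below wraps the hosted GPT-3.5 model (the model family behind ChatGPT) with a single prompt that asks for a sentiment label and a one-sentence summary of each review. It assumes the openai Python SDK in its pre-1.0 interface and an API key supplied through the environment; the review text is invented.

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]   # assumes a key is set in the environment

def analyse_review(review: str) -> str:
    """Return a sentiment label and a one-sentence summary for a customer review."""
    prompt = (
        "Classify the sentiment of the following customer review as positive, "
        "neutral or negative, then summarise it in one sentence.\n\n"
        f"Review: {review}"
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,                          # deterministic output suits analytics use
    )
    return response["choices"][0]["message"]["content"]

print(analyse_review("Setup took five minutes, but the companion app crashes "
                     "whenever I try to export my data."))
```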

Technologically, creating a derivative or product out of ChatGPT would require expertise in artificial intelligence and natural language processing, as well as access to large amounts of relevant data. Additionally, it would require significant computing resources, as training machine learning models can be computationally intensive. However, with the right expertise and resources, you can use the technology behind ChatGPT to create new and innovative products and applications.


Figure 2: How to Create a Derivative Product/Service out of ChatGPT?

Creating a derivative or product from ChatGPT or any other machine learning model involves considering various parameters. Some of the key parameters that could be important include:

  • Training Data: The quality and quantity of training data used to train the model are critical parameters in the development of a derivative or product out of ChatGPT. The dataset used must be relevant and large enough to ensure that the model can capture the necessary nuances in the input data.

  • Model Architecture: The architecture of the model is also an essential parameter to consider when creating a derivative or product out of ChatGPT. The model architecture includes the number of layers, the number of neurons per layer, the activation functions, and other design choices that can affect the model's performance.

  • Hyperparameters: Hyperparameters are additional model parameters that can be adjusted to optimize model performance. These include learning rate, batch size, optimizer, and regularization parameters, among others.

  • Evaluation Metrics: The evaluation metrics used to assess the model's performance should be relevant to the specific use case of the derivative or product. Common evaluation metrics for language models include accuracy, perplexity, and F1-score; a short sketch of one such metric appears after this list.

  • Deployment Environment: The environment in which the derivative or product will be deployed is also a critical parameter to consider. Factors such as the available computing resources, scalability, and reliability of the infrastructure can impact the effectiveness of the model in real-world use.

  • Ethics and Privacy: Considerations around the ethical and privacy implications of the derivative or product must also be taken into account. The data used to train the model must be ethically sourced and representative, and the model's deployment should not violate any privacy laws.
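To give the hyperparameter and evaluation points above a concrete shape, here is a minimal sketch of one such metric, perplexity, computed with the Hugging Face transformers library. GPT-2 again stands in for the derivative model, and the held-out sentences are hypothetical; in practice the evaluation set would come from the product's own domain.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical held-out sentences from the derivative product's domain
eval_texts = [
    "Please confirm the warranty period for the X200 router.",
    "The firmware update failed with error code 0x17.",
]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

losses = []
with torch.no_grad():
    for text in eval_texts:
        inputs = tokenizer(text, return_tensors="pt")
        # Passing labels == input_ids makes the model return the mean token-level loss
        loss = model(**inputs, labels=inputs["input_ids"]).loss
        losses.append(loss.item())

perplexity = math.exp(sum(losses) / len(losses))
print(f"Perplexity on held-out domain text: {perplexity:.1f}")
```

A lower perplexity on domain text suggests the model has captured that domain; a jump in perplexity after a hyperparameter change (say, a larger learning rate) is an early warning that the derivative is regressing.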

Let's unpack each parameter mentioned here.

  1. First, if you have training data, it means you have designated some parameters in the development of the derivative product you intend to build. This training data is not ChatGPT's own training data; it is attributed to the product you are making. However, there is no doubt that ChatGPT would be capable, at least at some level, of estimating how this training data is used and whether it is workable. In the lexicon of artificial intelligence ethics, training data falls within the ambit of ethical issues. The best one could propose here is to understand how a derivative product uses its training data to generate outputs, which may require either a regulatory sandbox or another regulatory mechanism in place.

  2. Second, model architecture may not be a generalised concern in the lexicon of AI ethics, unless it becomes necessary in quantitative terms, where too many use cases that have an adverse impact on the human environment are being tested.

  3. Third, hyperparameters become important, but if their use case or market distribution is insignificant, even for testing purposes, then the least that could be expected is to put helpful technological safeguards in place by default and by design.

  4. Fourth, evaluation metrics could be considered important for achieving AI explainability. While training data needs to be proper and avoid biases that could create adverse outcomes, evaluation metrics can be treated as an additional lens for understanding how the training data is being used.

  5. Fifth, the deployment environment is connected to how evaluation metrics reflect the effectiveness of the environment in which the derivative AI product is being tested. The better this is done, and the more clearly it is understood, the more safely one can know whether the training data is effective.

  6. Lastly, ethical and privacy concerns are bound to arise, and there is no doubt that such basic safeguards need to be maintained. However, a better measure for tracking ethical concerns (since privacy can be addressed by design and by default) is to foresee the risks attached. There is a quantitative element to this, and that could be really helpful.

Overall, there are many solutions and possibilities. However, data scientists and experts in Generative AI believe that the use cases of ChatGPT in the form of derivatives are over-hyped, and that many of them may not be as useful or as polished as intended. A former Google AI ethicist has remarked on the use of proprietary information by ChatGPT and other LLM platforms, which raises concerns about data rights and anti-competitive practices:

She said that the data used to train these models (GPT-3.5, or LaMDA) is either proprietary or just scraped from the internet. “Not a lot of attention is paid to the rights of the people in those data—also referred to as Data Subjects in the EU’s Artificial Intelligence Act—and also the people who have created those data, including artists, writers, etc.,” said Hanna, explaining that these people are not getting compensated and most companies are considering it like an afterthought.

Let's now consider whether creating such derivatives, or derivatives of derivatives, could affect market competition.


Competition Law Concerns on Derivative Products


There are multiple competition law concerns that may emerge when such derivative products and solutions are created. Although the concerns may overlap, a workable understanding is necessary to assess the hasty and overrated use of Generative AI tools.


To make it simple, I have categorised the ethical dilemmas below, each with an explanation.

  • Dominant Market Position: Creating a derivative or product out of ChatGPT that dominates a particular market can lead to concerns about anti-competitive behavior. Companies must be mindful of the potential impact of their products on market competition and ensure that they comply with applicable competition laws and regulations.

  • Exclusive Agreements: Companies may use exclusive agreements to limit competition in a particular market, which could be seen as anti-competitive behavior. Creating derivatives or products out of ChatGPT that rely on exclusive agreements could give rise to concerns about anti-competitive behavior.

  • Price Fixing: Companies must be careful not to engage in price-fixing or other anti-competitive practices when creating derivatives or products out of ChatGPT. This could include practices such as collusion with competitors, setting prices artificially high or low, or engaging in other practices that restrict competition.

  • Intellectual Property: Companies must be mindful of intellectual property issues when creating derivatives or products out of ChatGPT. This could include issues such as patent infringement or misappropriation of trade secrets. Companies must ensure that they have the appropriate licenses and permissions to use the intellectual property associated with ChatGPT.

  • Mergers and Acquisitions: Companies that create derivatives or products out of ChatGPT may engage in mergers or acquisitions that could give rise to concerns about anti-competitive behavior. Companies must ensure that their mergers and acquisitions do not harm competition in the relevant markets.

  • Interoperability: Companies must be careful not to engage in practices that limit interoperability when creating derivatives or products out of ChatGPT. Interoperability refers to the ability of different products and systems to work together seamlessly. Limiting interoperability can give rise to concerns about anti-competitive behavior.

  • Data Ownership: Companies must be mindful of data ownership issues when creating derivatives or products out of ChatGPT. This could include issues such as using data without permission or failing to compensate data owners appropriately. Companies must ensure that they have the appropriate permissions and licenses to use the data associated with ChatGPT.

Naturally, these concerns are inter-related, which is why it was necessary to categorise them. The attached dilemma is whether such tools have a market-related impact. As discussed in VLiGTA-TR-001, our report for the Vidhitsa Law Institute, here are the working conditions which generate Artificial Intelligence hype:


Stage 1: Influence or Generation Determination

  • An Artificial Intelligence hype cycle is perpetuated to influence or generate market perception in a real-time scenario such that a class of Artificial Intelligence technology as a product / service is used in a participatory or preparatory sense to influence or generate the hype cycle.


Stage 2: Influencing or Generating Market Perceptions & Conditions

  • The hype cycle may be continuous or erratic, but it has a real-time impact on market perceptions which affects the market for the products / services involving Artificial Intelligence technologies, as estimated from a standardised / regulatory / judicial / statutory point of view.

  • The hype cycle may directly or indirectly perpetuate the course of specific anti-competitive practices.

  • Beyond the real-time impact on market perceptions, the consecutive effects of the real-time impact may distort a limited set of related markets, provided that the specific anti-competitive practices are furthered in a distinct pattern.


Stage 3: Uninformed or Disinformed Markets

  • The features of the product / service subject to the hype cycle are uninformed / disinformed to the market. It may be stated that misinforming the market may be construed as keeping the market merely uninformed, except in cases where the two are not mutually exclusive.


Stage 4: Misdirected Perceptions in the Information & Digital Economy

  • The hype cycle may be used to distract the information economy by converting the state of being uninformed or disinformed into misdirected perception. This means that the hype cycle around a product or service may not clarify certain specifics and may cause the public or market players to shift their focus towards ancillary considerations, comfortably ignoring the fact that they have been uninformed or disinformed.


Stage 5: Estimation of the Hype Cycle through Risk Determination

  • In addition, even if preliminary clarifications or assessments are provided to the market, the lack of due diligence in determining the inexplicable features of the Artificial Intelligence technology in any form or means as a part of the product or service involves the assessment of the hype cycle with a risk-centric approach.

Taking these working conditions into account, we may see a range of competition policy issues, which could even relate to Stages 3, 4 and 5 of an AI hype cycle as they operate in real life.

Here is a mapping of each stage to an example of a digital competition law violation, based on the working conditions of AI hype that we developed in VLiGTA-TR-001.

  1. Influence or Generation Determination: A company develops a derivative AI product based on ChatGPT that is marketed as superior to competing products, leading to increased demand and market share.

  2. Influencing or Generating Market Perceptions & Conditions: The company engages in exclusive agreements with key customers or suppliers, or uses pricing strategies that eliminate competition.

  3. Uninformed or Disinformed Markets: The company misrepresents the capabilities of the derivative AI product, making false or exaggerated claims about its performance or benefits.

  4. Misdirected Perceptions in the Information & Digital Economy: The company uses the hype cycle around the derivative AI product to distract the market from other important considerations, such as data privacy or ethical concerns.

  5. Estimation of the Hype Cycle through Risk Determination: The company fails to conduct proper due diligence to determine the potential risks and impact of the derivative AI product on the market and competition.

The Impact of Derivative Products on the Future of Work


Now, such derivative products created out of Generative AI solutions could have a disruptive impact on the future of work. However, the impact may be uneven, and it may not arrive as rapidly as is often assumed. Here are some ways to understand the possible impact:

  1. Disruption of existing job roles: As derivative products created using Generative AI Applications become more advanced and sophisticated, they have the potential to disrupt existing job roles. For example, a derivative product created using ChatGPT could automate tasks that were previously performed by humans, potentially leading to job loss or a shift in the type of skills required for certain job roles.

  2. Creation of new job roles: On the other hand, the development of derivative products could also create new job roles that require skills in working with and developing AI technologies. For example, companies that develop derivative products may require data scientists, machine learning engineers, and AI developers to create and maintain their products.

  3. Skill development and retraining: As derivative products become more prevalent in the workforce, there may be a greater need for workers to develop new skills or undergo retraining to adapt to the changing nature of work. This could include developing skills in working with AI technologies, as well as skills in areas that are less likely to be automated, such as creativity, critical thinking, and emotional intelligence.

  4. Ethics and regulation: The development of derivative products using Generative AI Applications also raises ethical concerns and the need for regulation. Companies must ensure that their products do not violate digital competition laws or perpetuate biases that could discriminate against certain groups of people. Regulators must also monitor the market to ensure that competition laws are not violated, and to ensure that companies are transparent about the limitations and potential biases of their AI products.

Now, at a fundamental level, the development and widespread adoption of AI technologies like ChatGPT can lead to significant shifts in the labor market, as certain tasks and job functions become automated or augmented by machines. This can result in job displacement for some workers, but it can also create new opportunities for those with the skills and knowledge needed to work effectively with these technologies.


For example, the development of ChatGPT-based chatbots and virtual assistants has created new job opportunities for developers, designers, and data scientists who can create and maintain these products. Similarly, the use of ChatGPT in the healthcare industry has created new roles for clinicians and researchers who can leverage the technology to improve patient outcomes and develop new treatments.


At an operative level, the democratization of ChatGPT through derivative products and services can have a significant impact on the way people work, as well as the skills and knowledge that are required to be successful in the modern workplace. For example, the widespread use of ChatGPT-based tools for language translation and content generation has made it easier for people to communicate and create content across language barriers, but it has also placed a greater emphasis on digital literacy and technical skills.


Additionally, the development of derivative products and services based on ChatGPT has the potential to create new forms of work and new ways of working, such as remote work, freelance work, and gig work. This can provide greater flexibility and autonomy for workers, but it can also create new challenges in terms of job security and benefits.


It would however be intriguing to see how these derivative products are used to promote social and economic mobility and skill development. For example, if these products are used to provide training and educational materials to individuals who may not have access to traditional education, this could help to level the playing field and create more opportunities for people from diverse backgrounds.


Similarly, if derivative products based on ChatGPT are used to help workers transition to new roles or industries, this could help to mitigate some of the negative impacts of automation and job displacement. For example, these products could provide tailor-made training and skills development resources to help workers adapt to changing labour market demands, improving employability and job mobility.


However, it should be emphasised that the impact of these products will ultimately depend on how they are implemented and utilised. If they are mainly used by enterprises to further automate tasks and replace workers, the impact on the future of work may be negative. If, instead, they are used to promote social and economic mobility and skills development, they may have a positive impact on the future of work.


From a legal perspective, the use of derivative products to promote social and economic mobility and skill development is likely to be viewed more positively than their use to automate tasks and replace human workers. These products are less likely to be perceived as threatening competition or infringing intellectual property rights. However, legal issues may still arise, such as those related to data privacy and security.


Conclusion: Too Early to Conclude

To be fair, it is still early to predict the full impact of derivative products built out of ChatGPT and other generative AI applications on the future of work. However, it is clear that the development of such products could lead to both positive and negative consequences for society, and it is important to consider these factors when evaluating the ethical and legal implications of these technologies.


On one hand, derivative products could provide new opportunities for social and economic mobility and help people acquire new skills and knowledge. On the other hand, there is a risk that these technologies could lead to job displacement or exacerbate existing inequalities, particularly if they are not properly regulated or managed. From a legal perspective, it would be advisable to prepare specific legal solutions which develop consultative angles to address any anti-competitive and technology-related concerns.
