Please note: this article is a long-read.
The future of work and the future of innovation are among the most sensitive areas of concern at a time when the fourth industrial revolution is unfolding despite a gruesome pandemic. On the future of work, societies and governments fear which jobs will survive and which will not. Artificial intelligence and Web3 technologies create a similar perception, and hence similar fears. To be fair, the focus of that fear is neither accurate nor astute. The narrative across markets is that Generative AI and automation will take over human labour and information-related jobs; there is even the amusing suggestion that Generative AI prompting is now "a kind of profession". Looking at the future of innovation, of course, not every product, system or service needs to be framed as a use case for disrupting jobs and businesses. Marketing technology hype may be considered "business friendly", but it does not work, and it fails to promote genuinely innovative practices in the technology industry, especially the AI industry. In this article, I offer a regulation-centric perspective on certain trends related to the future of work and the future of innovation involving the use of artificial intelligence technologies in the present times. The article also covers the possibility of Artificial General Intelligence affecting the future of work and innovation.
The Future of Work and the Future of Innovation
The Future of Work and the Future of Innovation are two closely related concepts that are significantly impacted by the advancement of Artificial Intelligence (AI). The Future of Work refers to the changing nature of employment and work in the context of technological advancements, including AI. It encompasses the evolving skills required for jobs, the rise of automation, the growing prevalence of remote work, and the impact of AI on job displacement and creation.
AI has already begun to disrupt certain industries, such as manufacturing and transportation, by automating routine and repetitive tasks. While this has the potential to increase efficiency and productivity, it also raises concerns about job displacement and the need for reskilling and upskilling for the workforce.
In the future, AI is likely to have an increased, if not significant, impact on the job market. Professions that require analytical thinking, creativity, and emotional intelligence are expected to be in high demand, while jobs that are easily automated may be at risk of disappearing. It is important to note that the impact of AI on the job market is complex and will vary depending on factors such as industry, geographic location, and job type. Ultimately, much depends on how humans keep their skills current and up to date.
The Future of Innovation refers to the new opportunities and possibilities for creating and advancing technology with the help of AI. AI has the potential to revolutionize many fields, from healthcare to transportation, by enabling more efficient and effective decision-making and automation. AI can be used to analyze vast amounts of data, identify patterns and insights, and provide predictions and recommendations. This can be used to optimize business processes, enhance product development, and improve customer experiences. Additionally, AI can be used to solve complex problems and accelerate scientific research, leading to new discoveries and innovations.
However, it's important to note that AI is not a silver bullet and has its limitations. AI algorithms are only as good as the data they are trained on, and biases and errors can be introduced into the system. Additionally, AI raises concerns about privacy, security, and ethical considerations that need to be carefully addressed.
Estimating Possible "Disruptions"
Figure 2 provides a list that explains, from a regulatory standpoint, how artificial intelligence could really affect the future of work and innovation. This is not an exhaustive list, and some points may overlap between the future of work and the future of innovation. Let us discuss each of these points and deconstruct the narrative and the realities around them. These points are based on my insight into the AI industry and its academia in India, Western countries and even China.
Job requirements will become more complicated in some cases, simpler in others
Any job requirement posted by an entity, a government or an individual is not reflected merely by the pay grade or monetary compensation it offers. Money can be one factor in assessing how markets are reacting and whether the employment market deserves better pay. Nevertheless, it is the specifics of the work, and then the special requirements, that explain how job requirements will change. After two industrial revolutions, the quality of life is set to change everywhere, even as Global South countries are still trying to grow. For India and the Global South, adaptation may happen if a creative outlook towards skill education is used to focus on creating the jobs, and the skill sets, that will stay and matter. Attrition in employment has been a problem, but it can be dealt with properly.
To climb the food chain, enhancing both technical and soft skills is an undeniable must
As job requirements gradually expand in scope, climbing the food chain becomes a must. One cannot stay limited to a decade-old way of doing the tasks within one's work experience, because real-life instances of disruption are a genuine possibility. Investing in upskilling would be helpful.
More technology will involve more human involvement in areas one may not have imagined
One may assume that using artificial intelligence, or any disruptive tech product, system, component or service, would lead to a severe decrease in human involvement. Take, for example, no-code tools such as FlutterFlow, or a machine learning system that recommends what to code (already happening) to reduce the work of full-stack developers. People forget that such tools also create additional jobs to analyse specifics and suggest relevant solutions. Of the opportunities created by the use and inclusion of artificial intelligence, some after-effects will not last, while others could grow and stay for some time. The fact that AI hype is promoted in a manner lacking ethical responsibility shows how poorly markets are understood. This is why the US market was subject to clear disruptions that could not last long; India has also been a victim of this, even if the proportions are not as large as those of the US.
While climbing the food chain is inevitable, many at the top could go down, affecting the employment market
Many (though not most) of the top stakeholders in the food chain, across jobs, businesses, freelancing, independent work and public sector involvement, will have to readjust their priorities, because this is an obvious trend to watch. Some market changes could be quick, while others may not be so simple to ignore.
Derivative products, jobs, systems, services and opportunities will come and go regularly
As discussed in an article on ChatGPT and its Derivative Products, multiple kinds of derivative products, jobs, systems, services and opportunities will be created. They will appear, rise, become hyped, and may either stay or fade away.
To be clear, the derivatives discussed here are strictly those related to the use of artificial intelligence technologies that create jobs, opportunities, technological or non-technological systems of governance, products or services. Take Figure 3 in context. Assume Product A is an AI product, and that, based on feedback related to Product A, three things happen: (1) a derivative product of Product A is created; (2) a job or opportunity called B is created; and (3) a job or opportunity called C is created. The need to sustain these opportunities around the production of "A" and its derivative then leads to the creation of two systems, E and F. Why are these systems created? Simply to handle operations related to the production, maintenance and other tasks surrounding Product A and its derivative. These systems may be based on AI or other technology, or may not involve much technological prowess at all. Naturally, one of them (in this case System E), along with Job/Opportunity C, becomes a stable use case that makes sense: practical and encouraging. This could further inspire the creation of a Product D, if possible.
The process and choices explained in the previous paragraph are, admittedly, a simplistic depiction of production and R&D dynamics. In real life, this whole process could take 2-5 years, or even 5-10 years, depending on how it unfolds. A minimal sketch of this product-derivative structure follows below.
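To make the structure described around Figure 3 easier to follow, here is a minimal, purely illustrative Python sketch of the product-derivative graph. All names (Product A, Jobs B and C, Systems E and F) are the hypothetical labels from the figure, not real products, and the "stable" flag simply marks which use cases survive the hype cycle in this scenario.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str             # "product", "job", or "system"
    stable: bool = False  # whether the use case survives the hype cycle
    spawned_by: list = field(default_factory=list)

# Hypothetical lifecycle from Figure 3: Product A spawns a derivative,
# two jobs (B, C) and two support systems (E, F); only some survive.
product_a  = Node("Product A", "product", stable=True)
derivative = Node("Derivative of A", "product", spawned_by=["Product A"])
job_b      = Node("Job/Opportunity B", "job", spawned_by=["Product A"])
job_c      = Node("Job/Opportunity C", "job", stable=True, spawned_by=["Product A"])
system_e   = Node("System E", "system", stable=True, spawned_by=["Product A", "Derivative of A"])
system_f   = Node("System F", "system", spawned_by=["Product A", "Derivative of A"])

graph = [product_a, derivative, job_b, job_c, system_e, system_f]

# The stable survivors are the use cases that could inspire a Product D.
survivors = [n.name for n in graph if n.stable]
print(survivors)  # ['Product A', 'Job/Opportunity C', 'System E']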
Academic research in law and policy will remain downgraded until adaptive and sensible approaches are adopted with time
Here is an excerpt from an article by Daniel Lattier for Intellectual Takeout, which describes the state of readership of social science research papers, in developed countries and overall:
About 82 percent of articles published in the humanities are not even cited once for five years after they are published.
Of those articles that are cited, only 20 percent have actually been read.
Half of academic papers are never read by anyone other than their authors, peer reviewers, and journal editors.
Another point which Daniel makes is this:
Another reason is increased specialization in the modern era, which is in part due to the splitting up of universities into various disciplines and departments that each pursue their own logic. One unfortunate effect of this specialization is that the subject matter of most articles make them inaccessible to the public, and even to the overwhelming majority of professors.
In fact, those who work in the law and policy professions may survive if they belong to the industry side of things. Since COVID, academics across the world have lost the edge and the appetite to write and contribute research in law, the social sciences and public policy. That a few people are still able to do so does not justify the overall trend.
Now, consider these insights alongside the disruptions that AI may cause. Take Generative AI: some universities across the world, including in India, have banned the use of ChatGPT and other GAN/LLM tools:
According to a Hindustan Times report, the RV University ban also applies to other AI tools such as GitHub Co-Pilot and Black Box. Surprise checks will be conducted and students who are found abusing these engines will be made to redo their work on accounts of plagiarism.
The reason is not just plagiarism. The academic industry is lethargic and lacks social and intellectual mobility in law and policy, which is a global problem and not just an Indian one. There may be exceptional institutions, but they are fewer than those that do not offer enough. Now, if people are not even skilled at a basic level in their areas of law and policy, then automating tasks, or the algorithmic use of any of their work, would easily make them vulnerable, and many professionals would have to upgrade their skills once they get the basics clear. In fact, it is governments and companies across the world, including the Government of India and some organisations in India, that are trying hard to stay updated with the realities of the artificial intelligence market and to produce stellar research. To counter this problem, certain things can certainly be done:
Embrace individual mobility and brilliance by focusing on excellence that caters to mobilisation
Keep pace and create skill-based learning; academia in India is incapable of creating skill opportunities in law and policy, unless institutions like the Indian Arbitration & Mediation Council, CPC Analytics and others step up, which they fortunately do
Specialisation should not be used as an excuse to prevent people from learning; education could be simulated in ways that make more people aware and skilled, in a sensible and self-aware manner
Access to resources is a critical issue that needs to be addressed; it is ironic that AI systems have access to a multitude of research books and works, while human researchers in the Global South (and India) face discrimination and cannot access research works, and publication via Scopus and other outlets has become prohibitively costly, if not impossible
Skill institutions must be created separately; they could be genuinely helpful in addressing the risks posed by disruptive technologies from a future-of-work perspective
R&D would have to be rigorous, risk- and outcome-based in technology and related sectors
The hype around Generative AI products, and the call to impose a six-month moratorium on AI research beyond GPT-4 or GPT-5, explain why big tech companies must not own the narrative and the market of artificial intelligence research and commercialisation. Ron Miller, writing for TechCrunch, discusses the potential of the Generative AI industry to become an industry for small businesses:
“Every company on the planet has a corpus of information related to their [organization]. Maybe it’s [customer] interactions, customer service; maybe it’s documents; maybe it’s the material that they published over the years. And ChatGPT does not have all of that and can’t do all of that.” [...] “To be clear, every company will have some sort of a custom dataset based on which they will do inference that actually gives them a unique edge that no one else can replicate. But that does not require every company to build a large language model. What it requires is [for companies to take advantage of] a language model that already exists,” he said.
The statement quoted above emphasises the importance of rigorous and outcome-based Research and Development (R&D) in the technology sector. It highlights that every company possesses a unique corpus of information that can be leveraged to gain a competitive edge. This corpus of information may be customer interactions, documents, or any material published by the organization over the years. It is suggested that companies do not need to build their own large language model to leverage this corpus of information. Instead, they can take advantage of existing language models, such as ChatGPT, to gain insights and make informed decisions.
The approach recommended is for companies to focus on using existing resources effectively, rather than reinventing the wheel. This can help companies save time and resources while still gaining valuable insights and improving their competitive position. However, to effectively leverage these resources, companies need to have rigorous R&D processes in place. This means focusing on outcomes and taking calculated risks to drive innovation and stay ahead of the competition. By doing so, companies can ensure that they are utilising their unique corpus of information to its fullest potential, and staying ahead in the ever-changing technology landscape.
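As an illustration of this "use an existing model over your own corpus" approach, here is a minimal sketch in Python, assuming the open-source sentence-transformers library, a placeholder embedding model name, and a small in-memory document set; none of these specifics come from the quoted article. The sketch retrieves the internal documents most relevant to a query, which could then be supplied as context to an existing language model rather than training one from scratch.

# A minimal retrieval sketch: embed a company's internal documents with an
# existing open-source model, then find the passages most relevant to a query.
# Assumes `pip install sentence-transformers numpy`; the corpus is a placeholder.
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "Customer service transcript: refund policy questions from Q3.",
    "Internal memo: product roadmap for the analytics dashboard.",
    "Published whitepaper: data retention practices and compliance.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # an existing, general-purpose embedding model
doc_vecs = model.encode(corpus, normalize_embeddings=True)

def top_k(query: str, k: int = 2):
    """Return the k corpus documents most similar to the query."""
    q_vec = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec  # cosine similarity, since vectors are normalised
    best = np.argsort(scores)[::-1][:k]
    return [(corpus[i], float(scores[i])) for i in best]

# The retrieved passages would then be passed as context to an existing
# language model (hosted or local) instead of building a new one.
print(top_k("What does our refund policy say?"))

The design choice mirrors the quoted point: the competitive edge lies in the proprietary corpus and the inference pipeline built around it, not in building another large language model.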
Here is an intriguing tweet from Pranesh Prakash, a respected technology law expert and researcher, on the impact of AI on jobs in the South Asian region (the Indian subcontinent). The points he raises are particularly relevant when we take an emerging market like India (or Bangladesh) into perspective.
Here is a summary of what he refers to:
One cannot make an accurate prognostication about how generative AI will affect jobs. In certain cases, the knowledge sector in an outsourcing destination could grow, while in other cases it could shrink.
Mentally-oriented jobs (Knowledge/Arts/Music-sector jobs, etc.) will be affected first, and not manual labour jobs (for obvious reasons).
The presence of Generative AI and other forms of neural net-based models could be omnipresent, diffuse and sometimes dissimulated, as if, in his words, it were just invisible.
All four points in the original tweet are valid. Issues related to knowledge, and their epistemological and ontological links, could genuinely affect mentally-oriented jobs. In some cases this could be a real disruptor where technological mobility is helpful, while in other cases it might not be useful and could instead be responsible for mere information overload, and even epistemic trespassing (see the paper written by Nathan Ballantyne). On p. 373 of the paper, Ballantyne makes a valid point about how narrowly analysis-driven approaches can drive philosophical questions quite counterproductively.
"[Q]uestions in philosophy may become hybridized when bodies of empirical fact, experimental evidence, and empirically-driven theories are recognized to be relevant to answering those questions. As a matter of fact, the era of narrowly analysis-driven philosophy represents an anomaly within the history of philosophy."
Taking a cue from Ballantyne's paper and Prakash's points, it is essential to point out how Generative AI, from an epistemic and ontological perspective, could be a fragile tool to use. The risk- and ethics-based value held by any generated proprietary information, and by the algorithmic activities and operations of these tools, will be subject to scrutiny. Any improbable, manipulative or dissimulated epistemic feedback that one takes from such tools into decision-making not only creates hype that generates risks but could also affect knowledge societies and economies. Of course, the human element of responsibility is undeniable. This is why having a simple, focused and clear SOP (standard operating procedure) for using such Generative AI tools could help in assessing what impact these tools really have.
Now that we have covered some genuine concerns about the effect of AI on the future of work and innovation, it is necessary to analyse the role of artificial general intelligence. The reason I cover AGI is this: human motivation is appropriately tracked via the methods of behavioural economics, and AGI and the ethics of artificial intelligence represent a narrative built around human motivation and assumed machine motivations. The problem, however, is that most narrow AI systems we know lack explainability (beyond the usual black box problem) because how they learn is not well understood. Only recently have scientists begun to figure out, to a limited degree, how Generative AI tools learn and understand, and they remain far short of the human brain (and thus of the potential of the "Theory of Mind" discussed in AI ethics). Hence, in this concluding section of the article, I address a simple question: "Is AGI significant enough to be considered an imperative for regulation when it comes to the future of work and the future of innovation?" I hope this section is interesting to read.
Whether AGI would Disrupt the Future of Work & Innovation
In this concluding part, I discuss the potential role of artificial general intelligence (AGI) and whether it can really affect the future of work and innovation. For starters, in artificial intelligence ethics we assume that Artificial General Intelligence (AGI) refers to the hypothetical ability of an artificial intelligence system to perform any intellectual task that a human can. As per AI ethicists, AGI would be capable of learning and adapting to any environment, just as humans do. It would have the ability to reason, solve problems, make decisions, and understand complex ideas, regardless of the context or circumstances.
In addition, AGI (allegedly) would be able to perform these tasks without being explicitly programmed to do so. It would be able to learn from experience and apply that knowledge to new situations, just as humans do. This ability to learn and adapt would be a crucial characteristic of AGI, as it would enable it to perform a wide range of tasks in a variety of contexts.
So, in simple terms, the narrative on AGI is really about safety and risk recognition. On this aspect, Jason Crawford, writing for The Roots of Progress, refers to the 1975 Asilomar Conference and explains how risk recognition and safety measures could be developed. The excerpt from the article is insightful for understanding how such safety thinking could apply to AGI:
A famous example of this is the 1975 Asilomar conference, where genetic engineering researchers worked out safety procedures for their experiments. While the conference was being organized, for a period of about eight months, researchers voluntarily paused certain types of experiments, so that the safety procedures could be established first. When the risk mitigation is not a procedure or protocol, but a new technology, this approach is called “differential technology development” (DTD). For instance, we could create safety against pandemics by having better rapid vaccine development platforms, or by having wastewater monitoring systems that would give us early warning against new outbreaks. The idea of DTD is to create and deploy these types of technologies before we create more powerful genetic engineering techniques or equipment that might increase the risk of pandemics.
Now, the idea behind DTD is to proactively address the potential risks associated with new technologies by prioritising the development of safety measures and strategies, thereby reducing the likelihood of harm and promoting responsible innovation. Rohit Krishnan, in Artificial General Intelligence and how (much) to worry about it for Strange Loop Canon, offers a full-fledged chart explaining how AGI as a risk would play out:
If one reads the article and looks through Krishnan's carefully curated mind map, it becomes obvious that the risk of implementing any artificial general intelligence is not so simple to estimate. The chart is self-explanatory, and I would urge readers to go through this brilliant work. I would like to highlight the core argument Krishnan makes, which lawyers and regulators must understand if they worry about the hype behind artificial general intelligence. This excerpt is a long read:
We need whatever system is developed to have its own goals and to act of its own accord. ChatGPT is great, but is entirely reactive. Rightfully so, because it doesn’t really have an inner “self” with its own motivations. Can I say it doesn’t have? Maybe not. Maybe the best way to say is that it doesn’t seem to show one. But our motivations came from hundreds of millions of years of evolution, each generation of which only came to propagate itself if it had a goal it optimised towards, which included at the very least survival, and more recently the ability to gather sufficient electronic goods. AI today has no such motivation. There’s an argument that motivation is internally generated based on whatever goal function you give it, subject to capability, but it’s kind of conjectural. We’ve seen snippets of where the AI does things we wouldn’t expect because its goal needed it to figure out things on its own. [...] A major lack that AI of today has is that it lives in some alternate Everettian multiversal plane instead of our world. The mistakes it makes are not wrong per se, as much as belonging to a parallel universe that differs from ours. And this is understandable. It learns everything about the world from what its given, which might be text or images or something else. But all of these are highly leaky, at least in terms of what they include within the. Which means that the algos don’t quite seem to understand the reality. It gets history wrong, it gets geography wrong, it gets physics wrong, and it gets causality wrong.
The author argues that the lack of motivation in current AI systems is a significant limitation, as AI lacks the same goal optimisation mechanisms that have evolved over hundreds of millions of years in biological organisms. While there is an argument that motivation is internally generated based on the goal function provided to AI, it remains conjectural. Additionally, the author notes that AI makes mistakes due to its limited understanding of reality, which could have implications for the development and regulation of AI, particularly with the potential risks associated with the development of AGI. Therefore, the narrative of responsible AI emphasises the importance of considering ethical, societal, and safety implications of AI, including the development of AGI, to ensure that the future of work and innovation is beneficial for all.
Now, from a regulatory standpoint, there is a growing concern that AI tools built on conjecture rather than on a clear set of rules or principles may pose accountability challenges. The absence of a clear motivation or inner self in AI makes it difficult for regulators to hold AI systems accountable for their actions. As the author suggests, today's AI lacks the motivations and goals that have been essential to how humans propagate and survive. While AI algorithms may have goal functions, these are subject to capability limits and may not be reliable in all scenarios. Additionally, AI's mistakes often stem from its limited understanding of reality, which can result in errors about history, geography, physics, and causality. Regulators may struggle to understand the motivation aspect behind AI tools, which are often based on complex algorithms that are difficult to decipher. This makes it challenging to establish culpability when AI tools make mistakes or cause harm. In many cases, regulators may not even be aware of the limitations of AI tools and the potential risks those limitations pose.
To conclude, an interesting approach to addressing such concerns and further understanding the fear of artificial general intelligence (maybe even arriving at self-regulatory methods, if not measures) is to undertake epistemic and ontological analysis in legal thinking. In his book Law 3.0, Roger Brownsword (Professor at King's College London, whom I interviewed for AI Now by the Indian Society of Artificial Intelligence and Law) discusses the ontological dilemmas that arise if technology regulation becomes technocratic (in simple terms, too automated), which justifies the need to be good at epistemic and ontological analysis:
With rapid developments in AI, machine learning, and blockchain, a question that will become increasingly important is whether (and if so, the extent to which) a community sees itself as distinguished by its commitment to governance by rule rather than by technological management. In some smaller-scale communities or self-regulating groups, there might be resistance to a technocratic approach because compliance that is guaranteed by technological means compromises the context for trust – this might be the position, for example, in some business communities (where self-enforcing transactional technologies are rejected). Or, again, a community might prefer to stick with regulation by rules because rules (unlike technological measures) allow for some interpretive flexibility, or because it values public participation in setting standards and is worried that this might be more difficult if the debate were to become technocratic. [...] Law 3.0, is more than a particular technocratic mode of reasoning, it is also a state of coexistent codes and conversations. [...] Law 3.0 conversation asks whether the legal rules are fit for purpose but it also reviews in a sustained way the non-rule technological options that might be available as a more effective means of serving regulatory purposes.
In future analyses for Visual Legal Analytica, or in a VLiGTA report, such questions on developing epistemic and ontological analyses could perhaps be taken up. Nevertheless, on the future of work and innovation, it can safely be concluded that disruption is not the problem; failing to understand the frugality of disruption could be. This is where careful and articulate approaches are needed to analyse whether there are real disruptions in the employment market or not. There may well be legitimate corporate governance and investment law issues that could be brought under regulatory oversight, apart from the limited concerns around the "black box problem", which itself remains obscure and ungovernable without epistemic and ontological precision on the impact of narrow AI technologies.