



Abhivardhan

Book Review: Disrupt with Impact by Roger Spitz

This is a review of Roger Spitz's recently published book, "Disrupt with Impact". As a book review, it is a short read.


Disclaimer: My review of this book is limited to my examination and analysis of Chapters 9, 10, and 11.


 

The most important aspect of risk analysis and estimation is the relevance of any data point, inference, or trend within the context of that estimation. If we are not clear about the systemic or substantive realities surrounding any proposed data point or inference associated with risks, then our analysis will be clouded by judgments based on unclear and scattered points of assessment. The segment on the future of AI, strategic decision-making, and technology encouraged me to take a deeper look at this book and understand the purpose behind writing it. Are these chapters similar to the typical chapters on AI markets, ecosystems, and communities found in other books?


It does not seem that way, simply because throughout the book the author addresses certain phenomena and realities in a cautious, engaging, and neutral tone, grounded in tangible parameters. For example, some parameters, or the powerful "distinctive features of technology", are ubiquitous in their own way. I found the attribute of technology being combinatorial and fusion-oriented quite interesting, because the compounding effects of technology are indeed underrated. These compounding effects are based on generic and special human-technology relationships, and on how the progressive or limiting role of technology creates human attributes, or perhaps branches of human attributes (or maybe micro-attributes, who knows). Even if some of these attributes are not clearly discernible as trends or use-case correlations, that does not discount the role of any class of technology. I also appreciate that, unlike most authors and AI experts who view technology as supposedly neutral, the book asserts the commonsensical point that no technology is neutral.


Superstupidity, technology 'substandardism' and superintelligence


The reference to the term 'superstupidity' in this book is both ironic and intriguing. The author does not mince words when pointing out how substandard AI use cases or preview-level applications may impact humans by fostering idleness. Here is an excerpt from the book:


'Maybe the existential risk is not machines taking over the world, but rather the opposite, where humans start responding like idle machines—unable to connect the emerging dots of our UN-VICE world.'

This excerpt reflects on a crucial element of the human-technology relationship and even anthropology: the evolution of human autonomy. It is praiseworthy that the author unfolds the simple yet profound point that promoting a culture of substandardism (yes, I’ve coined this word in this book review) could render the human-technology relationship so inactive that humans might be counter-anthropomorphized into 'idle machines.' The narrative raised by the author is deep. It is distinct from the usual argument that using smartphones or devices makes you lazy in a dependency-related sense when transitioning from older classes of technology to newer versions of the same class.


Between the 2000s and the 2010s, the tech transition was exceptionally quick. However, due to a technology winter, the pandemic, the transformation of social media into recommendation platforms, and the lack of channeled funding for large-scale, enhanced R&D across countries (among other reasons), we are witnessing the realization of Moore's Law and aspects of the Dunning-Kruger effect from a tech and behavioral economy standpoint. The expansion of human dependency has slowed across industrial, digital, and emerging technologies, which, in my view, is what the author highlights in this excerpt.

"For instance, believing that AI can be a proxy for our own under- standing and decision-making as we delegate more power to algorithms is superstupid. Perhaps AI is also superstupid and may cause mistakes, wrong decisions or misalignment. Further, consider AI ineptitude. What might appear as incompetence may simply be algorithms acting on bad data."

This is why I coined the term 'substandardism' for the purposes of this book review. The author brilliantly points out elements of technology substandardism, the disproportionate human-technology relationship, and how AI tools can indeed be superstupid. This reminds me of a recent call to shift the 'paradigm of Generative AI' by moving away from text-to-speech and text-to-visual toward text-to-action, which brings to mind The Entity from Mission: Impossible 7, Bujji from Kalki 2898, and Jarvis/The Vision from the Marvel Cinematic Universe—if I may reference bits of cinema.


That being said, the responsible and pragmatic approach of the author in treating 'substandardized' (another new term I coined for this review) artificial intelligence use cases as a vector for potential risks is noteworthy. The author’s sincere writing will help anyone in the risk management or technology industry recognize the reality of technology substandardism.


The Black Mirror Effect and Anticipatory Governance


Although, since COVID, the Black Mirror Effect has been frequently invoked in journal articles, industry insights, social media posts, and other commentary, usually in the most generalised way, I appreciate that the author has dedicated a section of his book to Anticipatory Governance.


For example, the reference to Edward Tenner is quite intriguing to me. I think Tenner's book "Why Things Bite Back" directly addresses the concept of unintended consequences of technology. Although the book is described as "dated," it's still considered "insightful." This suggests that Tenner's observations about technology and its unintended effects have stood the test of time and remain applicable to current technological developments, including AI. Tenner's work on unintended consequences provides a bridge between the existentialist philosophy discussed earlier (Sartre's "existence precedes essence") and the practical realities of technological advancement. It helps to ground the philosophical discussion in real-world examples and consequences.


The author remains quite deliberate and cautious in differentiating two closely related yet distinct policy phenomena: unintended drawbacks and perverse consequences. The author illustrates this point with several examples:


  1. Air conditioning was developed to cool buildings but ended up contributing to climate change due to increased energy consumption.

  2. Industrialized agriculture aimed to provide affordable food on a large scale but led to obesity and environmental damage.

  3. Passenger airbags were introduced to save lives in car accidents but initially caused an increase in child fatalities due to automatic deployment.

  4. The Indian government's bounty program to reduce the cobra population backfired, as people started farming cobras for the reward, and when the program was discontinued, the farmed cobras were released, worsening the problem.


This brings us to the Collingridge Dilemma and the 'quandary of time.' Since the push to regulate artificial intelligence has been in vogue across governments for months now, the author hints at the possibility of regulating or controlling AI communities and developers by subjecting them to an intended form of containment. However, containing a community without estimating the potential impact at early stages is a challenging task. The author honestly points this out as an example of how anticipatory governance is on the rise, which is commendable. Here's an excerpt:


"To anticipate, we must distinguish between the unintended consequences which may arguably be unavoidable, versus the unanticipated outcomes, those adverse effects which could have been anticipated and avoided. When negative externalities are unavoidable, we can still seek to manage them effectively."

The AAA Framework and the Future of Work


It seems to me that the author has been quite responsible in writing about the role of artificial intelligence in shaping the future of work, which is not surprising considering his contributions and efforts in ushering in techistentialism in his own way. That being said, the reference to "Radically Human" by Daugherty and Wilson remains interesting to me. The author highlights its vision that AI will augment and empower human expertise rather than replace it. The author is also accurate in highlighting that knowledge-intensive tasks have become integral to consulting and other facets of the employment and business communities. This is why I find the author's mention of AI's influence "spilling over into complex cognitive functions" praiseworthy.


In a thought-provoking excerpt, the author delves into the complex and often oversimplified relationship between artificial intelligence (AI) and the future of work. In the tenth chapter of the book, the author's skepticism towards simplistic slogans suggesting that AI will only replace those who cannot use it is insightful for people in risk management and technology. The author argues that such statements fail to capture the intricate interplay between cognification, mass automation, and the evolving nature of work.


The author emphasizes the uncertainty surrounding the net impact of AI on employment, acknowledging that while experts predict a surge in opportunities, the lack of data about the future makes definitive predictions difficult. The chapter underscores the need for a deeper understanding of these complex relationships to ensure a future where both humans and technology can thrive harmoniously. The chapter also highlights the author's observations on AI's increasing role in fields that traditionally require extensive education and training, such as law, accounting, insurance, finance, and medicine. The gradual automation and augmentation of these fields through generative AI are noted as significant transformations that require the integration of systems, adjustment of supply and demand, reskilling of workforces, and adaptation of regulations. It is notable that, unlike most "GenAI experts", the author is honest enough to enumerate the possibilities of a technology winter and the uncertain, unclear impact of AI technologies, let alone GenAI, on the skill economy.


Black Jellyfishes, Elephants & Swans


The author presents a compelling typology of risks associated with the development and deployment of artificial intelligence (AI). Drawing on vivid animal metaphors, the author categorizes these risks into three distinct types: Black Jellyfish, Black Elephants, and Black Swans. Each category represents a unique set of challenges and potential consequences that demand our attention and proactive responses.


The author begins by introducing the concept of Black Jellyfish, which are low-probability, high-impact events that grow from seemingly predictable situations into far less predictable outcomes. The author highlights several potential Black Jellyfish scenarios, such as info-ruption (the disruptive and potentially dangerous effects of information misuse), scaling bias (the amplification of discrimination and inequality through AI), and the fusion of AI and biotechnology (which could challenge the status of humans as dominant beings). These scenarios underscore the need to consider the cascading effects of AI and how they could spiral out of control.


Next, the author turns to Black Elephants, which are obvious and highly likely threats that are often ignored or downplayed due to divergent views and a lack of understanding. The author identifies several critical Black Elephants, including the need to reinvent education to keep pace with AI, the deskilling of decision-making as we delegate more responsibilities to AI systems, the potential for mass technological unemployment, and the double-edged sword of cyber insecurity. The author emphasizes the importance of mobilizing action, aligning stakeholders, and understanding the complex systems in which these risks are embedded.


Finally, the author explores the concept of Black Swans, which are unforeseeable, rare, and extremely high-impact events. The author posits several potential Black Swan scenarios, such as the development of artificial general intelligence (AGI) and superintelligent AI systems, extreme catastrophic failures resulting from interacting AI systems, and the magical discovery of cures for incurable diseases. While these events are inherently unpredictable, the author argues that we can still build resilient foundations, monitor for nonobvious signals, and implement guardrails to mitigate the potential consequences.


Throughout the tenth chapter, the author's language is both engaging and thought-provoking, drawing the reader into a deeper consideration of the risks and challenges associated with AI. The use of animal metaphors adds a layer of accessibility and memorability to the complex concepts being discussed, while also highlighting the urgency and gravity of the issues at hand.


One potential weakness of the sections on Black Jellyfish, Black Elephant and Black Swan is that they do not provide concrete examples or case studies to illustrate the risks and scenarios being discussed. While the animal metaphors are effective in capturing the reader's attention, some readers may desire more tangible evidence to support the author's claims.


The Future of Decision-Making: AI's Role and the Risk of Moral Deskilling


In the section on 'moral deskilling', the author delves into the complex relationship between artificial intelligence (AI) and human decision-making, particularly in the context of strategic decisions. The author's language is direct and engaging, drawing the reader's attention to the potential consequences of relying too heavily on AI in decision-making. By citing the Pew Research Center's cautionary statement, the author emphasizes the risk of humans becoming overly dependent on machine-driven networks, potentially leading to a decline in their ability to think independently and take action without the aid of automated systems.


Furthermore, the author introduces the concept of "moral deskilling," as described by the Markkula Center for Applied Ethics. This concept suggests that as humans increasingly rely on AI for decision-making, they may lose the ability to make moral judgments and ethical decisions independently. The author's inclusion of this concept adds depth to the discussion, prompting readers to consider the long-term implications of AI's role in decision-making.


Regarding the Pew Research Center, the author cites a report that expresses concern about the potential negative impacts of AI on human agency and capabilities. The report, titled "Concerns about human agency, evolution and survival," is part of a larger study called "Artificial Intelligence and the Future of Humans" conducted by the Pew Research Center in 2018. The study surveyed experts about their views on the potential impacts of AI on society by 2030.


The specific section cited highlights concerns that increasing dependence on AI could diminish human cognitive, social, and survival skills. Experts quoted in the report, such as Charles Ess from the University of Oslo and Daniel Siewiorek from Carnegie Mellon University, warn about the potential for "deskilling" as humans offload various tasks and capabilities to machines.


As for the Markkula Center for Applied Ethics, the Center has published extensively on the ethical implications of AI, including a report titled "Ethics in the Age of AI". This report, based on a survey of 3,000 Americans, found that a significant majority (86%) believe technology companies should be regulated, and 82% care whether AI is ethical or not.


Hence, the author responsibly introduces how AI tools and systems may contribute to decision-making value chains, in a pragmatic and straightforward fashion, which is noteworthy. The author does not hype the limited role of AI in value chains, which is really helpful, and maybe eye-opening for some. This excerpt summarises the author's exposition in the tenth chapter of his book:


"We prefer a world where human decisions propel our species forward, where we choose the actions that lead to staying relevant. If we do not, our C-suites might find themselves replaced by an A-suite of algorithms."

It is also interesting that the author claims 'data is the new oil' of the 21st century, while he also claims that "big data does not predict anything beyond the assumption of an idealized situation in a stable system". I think the realism of the second statement complements the role of data in a multipolar world, and how the data-algorithm relationship shapes risk management and facets of human autonomy.


Info-ruption and the Internet of Existence (IoE)


The author's reference to a less mainstream term, 'info-ruption', in the eleventh chapter of his book is as intriguing to me as the revelation of the idea of the Internet of Existence (IoE).


For instance, the author delves into the rapidly expanding world of data-driven innovations and their profound impact on our lives. The author's use of the phrase "data byting back" is a clever play on words, alluding to the idea that data is not only shaping our world but also actively influencing and potentially threatening our existence.


The author raises a crucial question: should data be treated as a language, as fundamental to our existence as our linguistic substrates? This question highlights the pervasive nature of data in our lives and suggests that our understanding of data is essential to comprehending its impact on our future.

The author presents a timeline of how software and data have evolved, starting with the digitization of business, moving to the democratization of software creation through no-code, low-code, and generative AI, and culminating in a digital universe that surpasses our physical space in importance. This timeline effectively illustrates the increasing dominance of data in our lives and the potential for software to "eat" not only the world but also humanity itself.


The author's use of the phrase "software eating humanity" is particularly striking, as it suggests that our reliance on data and software could ultimately consume us. This idea is reminiscent of the concept of technological singularity, where artificial intelligence surpasses human intelligence and control.


However, the author does not simply present a dystopian view of the future. Instead, he emphasizes the importance of understanding data in order to articulate its impacts and make informed decisions about its governance. The section concludes by highlighting the critical importance of data privacy, ethics, and governance in a world where our bodies and environments are increasingly composed of data.


Disinformation-as-a-service


There is a section in the eleventh chapter of the book where the author delves into the emerging threat of disinformation-as-a-service (DaaS). The author explains that DaaS is a criminal business model that provides highly customizable, centrally hosted disinformation services for a fee, enabling the commoditization of info-ruption. This concept is particularly alarming as it allows various bad actors, such as conspiracy theorists, political activists, and autocracies, to easily initiate disinformation campaigns that can reinforce each other by magnifying their impact.


The author's use of real-world examples, such as the QAnon groups targeting Wayfair, Netflix, Bill Gates, and 5G telecom operators, as well as the defamation lawsuits filed by Dominion Voting Systems and Smartmatic, effectively illustrates the tangible consequences of disinformation attacks on businesses. These examples demonstrate the severity of the threat and the potential for significant financial and reputational damage. I am, however, neither validating nor invalidating the legal claims in the defamation lawsuits per se. Nevertheless, the author's use of these real-world examples grounds the discussion in illustrative cases.


The author's transition from discussing DaaS to the broader topic of cyber insecurity is well-executed, as it highlights the growing vulnerability of our digital world. The author emphasizes that cyberattacks can be launched anonymously and at minimal cost, yet have devastating consequences, affecting critical infrastructure such as power grids, healthcare systems, and government structures. The inclusion of the potential legal ramifications for companies facing lawsuits due to inadequate cybersecurity measures further underscores the urgency of addressing these threats.


The introduction of ransomware-as-a-service (RaaS) as another emerging threat is particularly compelling. The author's comparison of RaaS to enterprise software, complete with customer service for smooth ransom collection, effectively conveys the ease with which cyberattacks can now be launched. The mention of leading ransomware brands such as BlackCat, DarkSide, and LockBit potentially becoming as commonplace as well-known software companies like Microsoft, Oracle, and Adobe is a powerful and unsettling analogy that drives home the severity of the threat.


Conclusion





Overall, the book is a definitive introductory read for understanding key technological risks around emerging technologies, including artificial intelligence, and the author has been largely responsible in articulating the risks, trends, and phenomena in a well-packaged, well-encapsulated way. I would not regard this book as a form of industry research or an authority on the scholarship of technology policy or technology risk management, but I am clear in saying that it is a promising compendium of the key risks, trends, and phenomena that we see in an emerging multipolar world.



 

Thanks for reading this insight.


Since May 2024, we have launched specialised, practice-based training programmes in technology law and artificial intelligence & law at indicpacific.com/train.


We offer special discounts for technology teams interested in enrolling in the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.


