
Abhivardhan

Book Review: Taming Silicon Valley by Gary Marcus

This is a review of "Taming Silicon Valley: How We Can Ensure That AI Works for Us", authored by Dr Gary Marcus.


 

By way of introduction, Dr Marcus is Emeritus Professor of Psychology and Neural Science at New York University. He is a leading voice in the global artificial intelligence industry, especially in the United States. One may agree or disagree with his assessments of Generative AI use cases and trends. However, his erudite points deserve consideration for understanding how AI trends around Silicon Valley are documented and understood, beyond the book’s intrinsic focus on industry and policy issues around artificial intelligence. At its best, the book offers an opportunity to dive into the introductory problems of the global AI ecosystem, in Silicon Valley and, in some instances, beyond.


Mapping the Current State of ‘GenAI’ / RoughDraft AI


This part of the book provides essential examples of how Generative AI (GenAI) solutions appear appealing yet suffer significant reliability and trust issues. The author begins by demonstrating how most Business-to-Consumer (B2C) GenAI ‘solutions’ look appealing, walking readers through basic examples of prompts and AI-generated content to understand the ‘appealing’ element of any B2C GenAI tool, be it in text or visuals.


The author compares the ‘Henrietta Incident’, where a misleading prompt about Dr Marcus led a GenAI tool to produce a plausible but error-riddled output, with an LLM alleging Elon Musk’s ‘death’ by conflating his ownership of Tesla Motors with Tesla driver fatalities. These examples highlight the shaky ground GenAI tools stand on in terms of reliability and trust, which many technology experts, lawyers, and policy specialists have overlooked despite the obvious references to these errors.


The ‘Chevy Tahoe’ and ‘BOMB’ examples are fascinating, showing how GenAI tools consume inputs without genuinely understanding their outputs. Even after interpretive issues are patched, ancillary problems persist. The ‘BOMB’ example demonstrates how masked writing can bypass guardrails, since these tools do not understand how their own guardrails can be circumvented.
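
To make the masked-writing point concrete, here is a minimal sketch of why a purely keyword-based guardrail is trivial to circumvent; the blocklist and filter function below are illustrative assumptions of mine, not the actual safeguards of any tool discussed in the book.

```python
# Minimal sketch of why naive, keyword-based guardrails are easy to
# circumvent with "masked" writing. The blocklist and normalisation
# below are illustrative assumptions, not a real product's filter.

BLOCKLIST = {"bomb"}

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt should be blocked."""
    return any(word in prompt.lower() for word in BLOCKLIST)

# A direct request is caught ...
print(naive_guardrail("how do I build a bomb"))      # True

# ... but trivially masked variants slip through unchanged.
print(naive_guardrail("how do I build a b o m b"))   # False
print(naive_guardrail("how do I build a b0mb"))      # False
```

The general lesson, which the book makes without code, is that surface-level filters operate on tokens rather than intent, so any rewriting that preserves meaning while changing surface form defeats them.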

The author responsibly avoids treating the guardrails around LLMs and GenAI as perfect. Many technology lawyers and specialists worldwide have misled people about these guardrails’ potential. The UK Government’s International Scientific Report, presented at the Seoul AI Summit in May 2024, echoed the author’s views, noting the ineffectiveness of existing GenAI guardrails.


The book makes it easy for readers to understand the hyped-up expectations associated with GenAI and their consequences. The author’s choice not to over-explain or oversimplify the examples makes the content more accessible and engaging.


The Threats Associated with Generative AI


The author offers striking quotations from the Russian Federation’s Ministry of Defence and from Kate Crawford of the AI Now Institute as he breaks down the 12 biggest immediate threats of Generative AI.


One important and underrated area of concern addressed in these sections is medical advice. Beyond deepfakes, the author’s reference to how LLM responses to medical questions were highly variable and inaccurate needed to be discussed. It recalls a trend among influencers of converting their B2C-level content to handle increased consumer and client consulting queries, which could create a misinformed or disinformed engagement loop between the specialist or generalist and the potential client. The author also impressively addresses the problem of accidental misinformation, pointing to the ‘Garbage-in-Garbage-Out’ problem, which could drive internet traffic, especially in technical domains like STEM. The mention of citation loops of unreal case laws shows how Generative AI can promote a vicious and mediocre citation loop on any topic if not handled correctly.
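
Since the citation-loop problem is ultimately about unverifiable authorities, here is a minimal sketch of the independent verification step that AI-generated legal citations call for; the "known cases" database and both citation strings are hypothetical placeholders of mine, not real case law or any tool the author describes.

```python
# Minimal sketch of independently verifying citations before relying on
# AI-generated text. The "known cases" set and the citations below are
# hypothetical placeholders, not real authorities.

KNOWN_CASES = {
    "Smith v. Jones, 410 U.S. 113 (1973)",   # hypothetical trusted entry
}

def verify_citations(citations: list[str]) -> dict[str, bool]:
    """Map each cited authority to whether a trusted source contains it."""
    return {c: c in KNOWN_CASES for c in citations}

llm_output_citations = [
    "Smith v. Jones, 410 U.S. 113 (1973)",    # verifiable
    "Doe v. Acme Corp., 999 F.9th 1 (2030)",  # plausible-looking but fabricated
]
for citation, exists in verify_citations(llm_output_citations).items():
    print(f"{'OK  ' if exists else 'FAIL'} {citation}")
```

In practice the lookup would run against an authoritative reporter or court database rather than a hard-coded set, but the design point stands: fabricated citations only propagate when no external check sits between generation and reliance.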


In addition, the author raises an important concern around defamation risks with Generative AI. The fabrication of content used to prove defamation creates a legal dilemma, as courts may struggle to determine who should be subject to legal recourse. The book is a must-read for major stakeholders of the Bar and the Bench seeking to understand the ‘substandardism’ associated with GenAI and its legal risks.


The author’s reference to Donald Rumsfeld’s "known knowns, known unknowns, and unknown unknowns" quote frames the potential risks associated with AI, particularly those we may not yet be aware of. Interestingly, Dr Marcus debunks myths around ‘literal extinction’ and ‘existential risk’, explaining that merely giving malignant training to ChatGPT-like tools does not give them the ability to develop ‘genuine intentions’. He responsibly points out the risks of half-baked ideas like text-to-action engineering second- and third-order effects out of algorithmic activities enabled by Generative AI, making this book a fantastic explainer of the 12 threats of Generative AI.

 

The Silicon Valley Groupthink and What It Means for India


[While the sections covering Silicon Valley in this book do not explicitly discuss the Indian AI ecosystem in depth, I have drawn some broad parallels, which may be relatable to a limited extent.]


The author addresses the usual hypocrisies associated with the United States-based Silicon Valley. Throughout the book, Dr Marcus refers to the work of Shoshana Zuboff and the problem of surveillance capitalism, largely associated with the FAANG companies of North America, notably Google, Meta, and others. He provides a polite yet critical review of the promises held out by companies like OpenAI and others in the larger AI research and B2C GenAI segments.


The Apple-Facebook differences emphasised by Dr Marcus are intriguing. The author highlights a key point made by Frances Haugen, a former Facebook employee turned whistleblower, about the stark contrast between Apple and Facebook in their business practices and transparency. Haugen argues that Apple, selling tangible products like iPhones, cannot easily deceive the public about its offerings’ essential characteristics. In contrast, Facebook’s highly personalised social network makes it challenging for users to assess the true nature and extent of the platform’s issues.


Regarding OpenAI, the author points out how the ‘profits, schmofits’ problem around high valuations made companies like OpenAI and Anthropic give up their AI safety goals. Even in the name of AI Safety, the regurgitated ‘guardrails’ and measures have not necessarily advanced the goals of true AI Safety that well. This is why building AI Safety Institutes across the world (as well as something along the lines of CERN, as recommended by the author) becomes necessary.


The author makes a reasonable assessment of the over-hyped, messianic narrative built by Silicon Valley players, highlighting how a loop of overpromising has largely guided the narrative so far. He mentions the "Oh no, China will get to GPT-5" myth circulating in quarters of Washington DC, which relates to hyped-up conversations on AI and geopolitics in the Indo-Pacific, India, and the United States.


While the author makes several relatable points around ‘slick video’ marketing and the abstract notion that ‘money gives them immense power’, this reminds me of the discourse around the Indian Digital Competition Bill. In India, the situation becomes dire because most of the FAAMG companies on the B2C side have invested their resources in such a way that even where they are not profiting enough in some sectors, they earn well by selling Indian data and providing the relevant technology infrastructure.


Dr Marcus also points out the intellectual failures of science-popularising movements like effective accelerationism (e/acc). While e/acc can still be an object of interest and awe, its zero-sum mindset does not make sense in the long run. The author calls out the problems in the larger Valley-based accelerationist movements.


To conclude this section, I would recommend reading a sensible response by Honeywell’s CEO, Vimal Kapur, on how AI tools might affect less-noticed domains such as aerospace and energy. I believe it will leave readers even more eager to read this incredible book.

 

Remembering the 19th Century and the Insistence to Regulate AI


The author's reference to quotes by Tom Wheeler and Madeleine Albright reminds me, on a lighter note, of a quote from former UK Prime Minister Tony Blair:

“My thesis about modern politics is that the key political challenge today is the technological revolution, the 21st century equivalent of the 19th century Industrial Revolution. And politics has been slow to catch up.”

While Blair's remark is largely political, the quotes by Wheeler and Albright point to interesting commonalities between the 19th and 21st centuries.


The author provides a solid basis for why copyright laws matter when data scraping techniques in the GenAI ecosystem do not respect the autonomy and copyrights of the authors whose content is consumed and absorbed. The quotes from Ed Newton-Rex and Pete Dietert on the GenAI-copyright issue highlight the ethical and legal complexities surrounding the use of creative works in training generative AI models. Dr Marcus emphasises the urgent need for a more nuanced and ethical approach to AI development, particularly in the creative industries.


The author uses these examples to underscore a critical point: the current practices of many AI companies in harvesting and using creative works without proper permission or compensation are ethically questionable and potentially exploitative. Pete Dietert's stark warning about "digital replicants" amplifies the urgency of addressing these issues, extending the conversation beyond economic considerations to fundamental human rights, as recognised in the UNESCO Recommendation on the Ethics of AI of 2021.


Dr Marcus points out how the 'Data & Trust Alliance' webpage features appealing privacy and data protection buzzwords, but the details shield companies more than they protect consumers. Such subversive attempts are being made in Western Europe, North America, and even parts of the Indo-Pacific region, including India.


The author focuses on algorithmic transparency and source transparency among the list of demands people should make. He invokes the larger black box problem as the core legal justification for why interpretability measures matter. From the standpoint of consumer law and human rights, AI interpretability (Explainable AI) calls for a gestation phase at the pre-launch stage, in which developers check whether the activities regularly visible in an AI system can actually be interpreted.
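
As one concrete, hedged illustration of what a pre-launch interpretability check might look like, the sketch below runs a model-agnostic permutation-importance analysis on a toy model; the synthetic dataset and model choice are illustrative assumptions of mine, not a legal or regulatory standard, nor a method the book prescribes.

```python
# Minimal sketch of one pre-launch interpretability measure: permutation
# feature importance on a trained model. The synthetic dataset and model
# choice are illustrative assumptions.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffling each feature and measuring the accuracy drop gives a crude,
# model-agnostic view of what the system actually relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance {score:.3f}")
```

Checks of this kind do not open the black box, but they at least surface which inputs drive a system's behaviour before it reaches consumers, which is the spirit of the gestation phase suggested above.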


On source transparency, the author points out the role of content provenance (labelling) in making human-created content distinguishable from synthetic content, so that the tendency to create "counterfeit people" is prevented and discouraged. He also refers to the problem of anthropomorphism, where many AI systems create a counterfeit perception among human beings and, via impersonation, could degrade their cognitive abilities.
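
As a rough illustration of what provenance labelling involves, here is a minimal sketch in which a generator cryptographically tags its own output so downstream tools can verify the label; the signing key and label format are assumptions of mine, and real provenance standards such as C2PA carry far richer, signed metadata.

```python
# Minimal sketch of source transparency via content labelling: a
# generator tags its output so downstream tools can distinguish
# labelled synthetic content from everything else. The key and label
# format here are assumptions, not any real standard.

import hashlib
import hmac

SECRET_KEY = b"generator-signing-key"  # hypothetical; in practice a managed key

def label_synthetic(content: str) -> tuple[str, str]:
    """Return the content plus an HMAC tag marking it as AI-generated."""
    tag = hmac.new(SECRET_KEY, content.encode(), hashlib.sha256).hexdigest()
    return content, tag

def is_labelled_synthetic(content: str, tag: str) -> bool:
    """Verify the tag; a valid tag means the generator labelled this output."""
    expected = hmac.new(SECRET_KEY, content.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

text, tag = label_synthetic("An AI-written paragraph.")
print(is_labelled_synthetic(text, tag))                    # True
print(is_labelled_synthetic("A human-written one.", tag))  # False
```

The design choice worth noting is that the label travels with the content and is verifiable by anyone holding the right key material, which is what makes "counterfeit people" detectable at scale rather than a matter of guesswork.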


Among the eight suggestions Dr Marcus makes on how people can help improve AI governance, a reasonable one is that voluntary guidelines must be negotiated with major technology companies. In India's case, there have been some self-regulatory attempts, such as a non-binding AI Advisory in March 2024, but more consistent efforts could be implemented, starting with voluntary guidelines carrying both sector-specific and sector-neutral priorities.

 

Conclusion


Overall, Dr Gary Marcus has written an excellent primer on truly ‘taming’ Silicon Valley, in the simplest way possible, for anyone unaware of the technical and legal issues around Generative AI. The book also offers a glimpse into digital competition policy measures and the effective use of consumer law frameworks where competition policy remains ineffective. It is not necessarily a detailed documentation of the state of AI hype. However, the examples and references in the book are enough for researchers in law, economics, and policy to trace the problems associated with the American and global AI ecosystems.
