

The European Union Artificial Intelligence Act: At a Glance


European Union Artificial Intelligence Act depicted as a transhumanist imagination.
© Indic Pacific Legal Research LLP, 2023.

The 27-nation bloc has introduced the world's first AI regulations, with a focus on limiting dangerous but narrowly targeted applications. Lately we have witnessed the increasing role of AI in our day-to-day lives, and it has become important to regulate AI models to ensure the integrity and security of nations. Chatbots and other general-purpose AI systems received very little attention before the arrival of ChatGPT, which reinforced the importance of regulating such models before they create turbulence in the world economy. The EU Commission published a proposal for an EU Artificial Intelligence Act back in April 2021, which provoked a heated debate in the EU Parliament amongst political parties, stakeholders, and EU Member States, leading to thousands of amendment proposals. The EU Parliament has approved the passage of the AI Act, which nevertheless raises questions about the implementation of the legislation.


In the European Parliament, the provisional AI Act must first be approved by the joint committee, then debated and voted on by the full Parliament, after which it would be adopted into law.


The objectives of the European Union Artificial Intelligence Act are summarised as follows:

  • address risks specifically created by AI applications

  • propose a list of high-risk applications

  • set clear requirements for AI systems used in high-risk applications

  • define specific obligations for users and providers of high-risk AI applications

  • propose a conformity assessment before the AI system is put into service or placed on the market

  • propose enforcement after such an AI system is placed on the market

  • propose a governance structure at the European and national levels.

Defining AI


The definitions offered by participating governments are summarised in the Forum for Cooperation on AI (FCAI) report. Although there is "no single definition" of artificial intelligence, many attempts have been made to define the term, since the definition will determine the scope of the legislation. It must strike a balance: too narrow a definition excludes certain types of AI that need regulation, while too broad a definition risks sweeping up common algorithmic systems that do not produce the relevant types of risk or harm. The definition in the AI Act is, however, the first definition of AI framed for regulatory purposes; earlier definitions of AI appeared in frameworks, guidelines, or appropriations language. The definition finally established in the AI Act is likely to serve as a benchmark for AI policies in other nations, fostering worldwide consensus.

According to Article 3(1) of the AI Act, an AI system is “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

Risk-based approach to regulating AI


A "proportionate" risk-based approach is promised by the AI Act, which imposes regulatory burdens only when an AI system is likely to pose high risks to fundamental rights and safety. The AI Act divides risk into four categories: unacceptable risk, high risk, limited risk, and low risk. These categories are targeted at particular industries and applications. One important topic under discussion by the Parliament and the Council will be the regulation and classification of applications at the higher levels, specifically those deemed to be unacceptably risky, such as social scoring, or high risk, AI interaction with with children in the context of personal development or personalised education.


The EU AI Act lays out general guidelines for the creation, commercialisation, and application of AI-driven systems, products, and services on EU soil. The proposed regulation outlines fundamental guidelines for artificial intelligence that are relevant to all fields. Through a required CE-marking process (CE marking indicates that a product has been assessed by the manufacturer and deemed to meet EU safety, health and environmental protection requirements), it establishes requirements for the certification of high-risk AI systems. This pre-market compliance regime also applies to the datasets used for machine-learning training, testing, and validation, in order to guarantee equitable results. The Act aims to formalise the high requirements of the EU's trustworthy AI paradigm, which mandates that AI must be robust in terms of law, ethics, and technology while upholding democratic principles, human rights, and the rule of law. In India, the Companies Act, 2013 lays down the compliance requirements that a company must meet, but AI models and their providers do not currently come under its ambit. India is not planning to develop AI regulations at this point in time, but taking a cue from the EU legislation, it could ensure strict compliance measures for upcoming players in the industry.


This risk-based pyramid (Figure 1) is combined with a contemporary, layered enforcement mechanism in the draft Artificial Intelligence Act. This implies, among other things, that applications posing minimal risk are subject to the laxest legal regime, while those posing unacceptable risk are prohibited. As risk rises between these two ends of the spectrum, the rules become stricter, ranging from non-binding, self-regulatory soft-law impact assessments coupled with codes of conduct, to strict, externally assessed compliance requirements throughout the life cycle of the application.



Figure 1: Risk-based pyramid on the use of artificial intelligence
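
To make the tiered logic concrete, the following minimal Python sketch maps the Act's four risk categories to their regulatory burdens. The example system labels and obligation summaries are illustrative assumptions distilled from the discussion above, not text from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four tiers of the AI Act's risk pyramid (Figure 1)."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright, e.g. social scoring
    HIGH = "high"                  # permitted subject to conformity assessment
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no new obligations; voluntary codes of conduct

# Illustrative (assumed) classification of example systems.
EXAMPLE_CLASSIFICATION = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "personalised_education_for_children": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """Summarise the regulatory burden attached to a tier."""
    return {
        RiskTier.MINIMAL: "voluntary codes of conduct and soft-law self-assessment",
        RiskTier.LIMITED: "transparency duties (e.g. disclosing that users face an AI)",
        RiskTier.HIGH: "CE-marking conformity assessment before market placement, "
                       "plus obligations throughout the life cycle",
        RiskTier.UNACCEPTABLE: "prohibited: may not be placed on the EU market",
    }[tier]

if __name__ == "__main__":
    for system, tier in EXAMPLE_CLASSIFICATION.items():
        print(f"{system}: {tier.value} -> {obligations(tier)}")
```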

Ban on the use of facial biometrics in law enforcement


Some Member States want to exclude from the AI Regulation any use of AI applications for national security purposes (the proposals exclude AI systems developed or used "exclusively" for military purposes). Germany has recently argued for ruling out remote real-time biometric identification in public spaces while allowing retrospective identification (e.g., during the evaluation of evidence), and has asked for an explicit ban on AI systems substituting for human judges, on risk assessments by law enforcement authorities, and on systematic surveillance and monitoring of employee performance.


AI-related revision of the EU Product Liability Directive (PLD)


In the EU, manufacturers are subject to strict civil-law liability under the PLD for damages resulting from defective products, regardless of negligence. To integrate new product categories arising from digital technologies, such as AI, a modification was required. The PLD specifies the conditions under which a product will be presumed "defective" for the purposes of a claim for damages, including a presumption of a causal link where the product is proven to be defective and the damage is of a kind ordinarily consistent with that defect. With regard to AI systems, the revision of the PLD aims to clarify that:

  • AI systems and AI-enabled goods are considered “products” and are thus covered by the PLD;

  • when AI systems are defective and cause damage to property, physical harm or data loss, the damaged party can seek no-fault compensation from the provider of the AI system or from a manufacturer integrating the system into another product;

  • providers of software and digital services affecting the functionality of products can be held liable in the same way as hardware manufacturers; and

  • manufacturers can be held liable for subsequent changes made to products already placed on the market, e.g., by software updates or machine learning.

In the Indian context, the Consumer Protection Act, 2019 provides for product liability, marking the end of the buyer-beware (caveat emptor) doctrine and the introduction of seller-beware as the new doctrine governing consumer protection. Section 84 of the Act enumerates the situations in which a product manufacturer is liable in a claim for compensation under a product liability action for harm caused by a defective product it manufactured. However, these provisions do not currently apply to AI models running in India, and keeping future needs in mind, provisions for the protection of consumers must be ensured on a priority basis.


Impact on Businesses


AI has enormous potential for progress in both technology and society. It is transforming how businesses produce value across a range of sectors, including healthcare, mining, and financial services. Companies must handle the risks associated with the technology if they want to use AI to innovate at the rate necessary to stay competitive and maximise the return on their AI investments. Businesses that are experiencing the greatest benefits from AI are much more likely to say that they actively manage risk than those whose outcomes are less promising.


The Act provides for fines of up to €30 million or 6 percent of global annual revenue, whichever is higher, making the penalties even heftier than those incurred for violations of the GDPR. The use of prohibited systems and the violation of the data-governance provisions when using high-risk systems will incur the largest potential fines. All other violations are subject to a lower maximum of €20 million or 4 percent of global revenue, and providing incorrect or misleading information to authorities will carry a maximum penalty of €10 million or 2 percent of global revenue.
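
Since each tier of fines combines a fixed ceiling with a revenue-based ceiling, and the draft Act applies whichever of the two is higher, the applicable maximum can be computed mechanically. Below is a minimal Python sketch; the tier names and the sample revenue figure are illustrative assumptions.

```python
# Maximum fine tiers in the draft AI Act: (fixed cap in EUR, share of global
# annual revenue). The applicable ceiling is whichever of the two is higher.
PENALTY_TIERS = {
    "prohibited_use_or_data_governance": (30_000_000, 0.06),
    "other_violations": (20_000_000, 0.04),
    "incorrect_or_misleading_information": (10_000_000, 0.02),
}

def max_fine(violation: str, global_revenue_eur: float) -> float:
    """Return the maximum possible fine for a violation category."""
    fixed_cap, revenue_share = PENALTY_TIERS[violation]
    return max(fixed_cap, revenue_share * global_revenue_eur)

# Example (assumed revenue): a company with EUR 2 billion in global revenue
# using a prohibited system faces a ceiling of max(30M, 6% of 2B) = EUR 120M.
print(f"EUR {max_fine('prohibited_use_or_data_governance', 2e9):,.0f}")
```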


Although enforcement rests with member states, as is the case for the GDPR, it is expected that the penalties will be phased in, with initial enforcement efforts concentrating on those who are not attempting to comply with the regulation.

The regulation would have extraterritorial reach, meaning that any AI system providing output within the European Union would be subject to it, regardless of where the provider or user is located. Individuals or companies located within the European Union, placing an AI system on the market in the European Union, or using an AI system within the European Union would also be subject to the regulation.
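
To illustrate how broad that scope is, here is a small, hypothetical Python check; the field names and the decision rule are assumptions distilled from the paragraph above, not the Act's own text.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    provider_in_eu: bool       # provider established in the EU
    user_in_eu: bool           # user located in the EU
    placed_on_eu_market: bool  # system placed on the EU market
    output_used_in_eu: bool    # system's output is used within the EU

def ai_act_applies(system: AISystem) -> bool:
    """Extraterritorial reach: the regulation applies if the system touches
    the EU through its provider, its user, market placement, or its output."""
    return (
        system.provider_in_eu
        or system.user_in_eu
        or system.placed_on_eu_market
        or system.output_used_in_eu
    )

# A provider outside the EU whose system's output is used in the EU is covered.
print(ai_act_applies(AISystem(False, False, False, True)))  # True
```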

Endnote

The unique legal-ethical framework for AI expands the way we think about regulating the Fourth Industrial Revolution (4IR), marked by the arrival of cutting-edge technologies such as artificial intelligence, and applying the proposed laws will be a completely new experience.


From the first line of code, awareness is necessary for responsible, trustworthy AI. The way we develop our technologies is shaping the future of our society, and fundamental rights and democratic principles are central to this vision. AI impact and conformity assessments, best practices, technology roadmaps, and codes of conduct are essential tools to support this awareness process, used by inclusive, multidisciplinary teams to monitor, validate, and benchmark AI systems. Ex ante and life-cycle audits will be essential. The new European rules will forever change the way AI is developed. Not just in the EU: in the coming days other countries too will need to set up regulatory frameworks for AI, and just as the GDPR has done for data protection, the EU AI Act will guide them.
