The author of this insight is a law student at National Law University, Odisha, and a former research intern at the Indian Society of Artificial Intelligence and Law.
OpenAI CEO Sam Altman was fired as CEO and removed from its Board of Directors for a short spell in November 2023. Along with him, another Board member, Greg Brockman, was also removed. Neither OpenAI's spokespersons nor the two men would provide any reasons when approached. However, it came to light that several OpenAI researchers and staff had written a letter to the Board before the firing, warning of a powerful artificial intelligence discovery that they said could threaten humanity.
OpenAI was initially created as a non-profit organisation whose mission was “to ensure that artificial general intelligence benefits all of humanity.”[1] In 2019, it opened a for-profit arm. This caused concern because it was anticipated that the for-profit wing would dilute OpenAI's original mission of developing AI for the benefit of humanity and would instead chase profit, which can often come at the expense of the technology's ethical development. Sam Altman and Greg Brockman favoured strengthening this wing, while the other four Board members were against ceding too much power to it and wanted instead to develop AI for human benefit rather than to achieve business goals.
OpenAI stated that Sam Altman had not been consistently candid in his communications with the rest of the Board regarding the development of a long-anticipated breakthrough: Q*, an Artificial General Intelligence (AGI) model said to surpass all existing AI developments and to achieve tasks and goals far beyond what we can currently imagine AI doing. OpenAI defines AGI as highly autonomous systems that outperform humans at most economically valuable work.
It was presumed that Altman, as CEO, had knowledge of advanced AGI developments within OpenAI. Media reports suggested he had concealed facts from the Board of Directors, leading them to fire him. After much debate in the AI community, including protests by OpenAI employees against his firing, Altman's position was restored and structural changes were proposed within OpenAI, including the involvement of figures such as Satya Nadella.
The concern, however, is that with its advanced abilities, a Q* AGI could be problematic precisely because of its opacity.
Any presumed form of AGI would, at least on current scientific knowledge, naturally build on reinforcement learning. A drawback of machine learning (ML) generally is the vast amount of data that models require for training: the more complex the model, the more data it needs, and even then the data may be unreliable, containing false or missing values or coming from untrustworthy sources. Reinforcement learning (RL), a branch of ML, largely sidesteps this data-acquisition problem because the model is not fed a curated dataset; it learns an optimal solution to a problem by taking decisions itself and adjusting to the rewards those decisions earn.

This feature is what would give an AGI its strong capabilities. Paired with vast computing power, it could predict many future events and phenomena, including fluctuations in stock and investment markets, advanced weather conditions, election outcomes and much more. It could use its mathematical algorithms and other complex technical elements to foresee how the human mind will think and act, thereby gaining knowledge of outcomes and taking decisions that can alter and influence many things.
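To make the learning loop concrete, here is a minimal sketch of tabular Q-learning, one of the simplest reinforcement learning algorithms. The toy corridor environment, reward scheme and parameter values below are illustrative assumptions for exposition, not a description of any OpenAI system; the point is only that the agent generates its own training experience by acting and observing rewards, rather than consuming a pre-collected dataset.

```python
import random

# Toy environment: a corridor of 5 cells; the agent starts at cell 0
# and receives a reward of +1 only upon reaching the goal cell 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left or step right

# Q-table: the agent's running estimate of each action's value in each state.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        # Core update: learn from the experienced reward alone; no external dataset.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy moves right in every cell.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```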
One of the biggest apprehensions behind the development of Q* technology was that it could pose a cyber security risk to national governments and their classified data. Can QUALIA crack the Advanced Encryption Standard (AES), the symmetric-key cipher widely used to protect classified information and documents? How could major cyber security breaches affect us? How can we safeguard against them? These are some questions one must consider.
Some Considerations
We depend heavily on encryption to secure our data, and it is tempting to assume that the encryption safeguards we rely on keep it safe. That is not entirely true. As discussed above, a technology like Q* may have the ability to break AES as well, accomplishing a feat once considered impossible: breaking modern encryption. A document recently leaked on 4chan suggests that Q* can defeat AES-192 and AES-256 encryption using a ciphertext-only attack. With AES compromised, the entire digital economy could fall apart; government secrets, healthcare data, banking data and more could be exposed. The NSA has previously shown interest in breaking encryption through its Project Tundra, whose aims resemble the alleged capabilities of Q*. This raises questions about the ethical implications of such AI advancements in the hands of state and non-state actors.
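For context, the sketch below shows how AES-256 is commonly applied in practice, here via the AES-GCM mode from Python's widely used `cryptography` library. It is routine protection of this kind that a ciphertext-only break of the sort attributed to Q* would undermine. The snippet is a simplified illustration (keys would normally live in a key-management system, not in memory), not a statement about Q* itself.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Generate a random 256-bit key; in production this would be held in a KMS/HSM.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)          # 96-bit nonce; must never repeat for a given key
plaintext = b"classified document contents"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)  # None = no associated data

# Decryption requires the same key and nonce; an attacker without the key
# faces exactly the ciphertext-only problem Q* is alleged to solve.
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
```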
Recommendations
Standards and Certifications
Mandatory legislation should be implemented requiring nations and specific organizations to have minimum cyber security standards in place. This should be complemented by a self-regulatory set of standards to help organizations develop their own cyber security measures.
States must establish a Computer Security Incident Response Team (CSIRT), a national Network and Information Systems (NIS) authority and a national NIS strategy. Companies must adopt state-of-the-art security approaches appropriate to the risks posed to their systems. Another element of standards and certification could be a regulation laying down standards for electronic identification and trust services for electronic transactions.
Regulation of encryption standards
Sensitive data can be protected by ensuring efficient encryption measures. Data must be classified according to its sensitivity and significance: investing in encrypting all types of data equally is unnecessary, while more sensitive data warrants stronger encryption with added layers of security. When and how encryption should be applied is a major consideration here. Multi-factor authentication is recommended as an extra layer of security in case an attacker gains access to encrypted data or its keys.
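As a rough sketch of this classification idea, the mapping below ties each sensitivity tier to progressively stronger controls. The tier names, cipher choices and the multi-factor requirement are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass

@dataclass
class EncryptionPolicy:
    algorithm: str       # cipher and mode to apply
    key_bits: int        # key length in bits
    require_mfa: bool    # demand multi-factor auth before releasing keys

# Hypothetical sensitivity tiers mapped to increasingly strong protections.
POLICIES = {
    "public":       EncryptionPolicy("AES-128-GCM", 128, require_mfa=False),
    "internal":     EncryptionPolicy("AES-256-GCM", 256, require_mfa=False),
    "confidential": EncryptionPolicy("AES-256-GCM", 256, require_mfa=True),
}

def policy_for(classification: str) -> EncryptionPolicy:
    """Return the minimum protection required for a given data class."""
    return POLICIES[classification]

print(policy_for("confidential"))
```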
Industry best practices should be adopted when applying encryption. End-to-end encryption is one such practice, protecting data throughout its entire lifecycle, from creation to storage to transmission.
Strong, widely accepted and fully updated encryption algorithms must be used, with periodic checks for upgrade requirements. Regular audits and assessments are necessary, and a supervisory body should ensure that these checks take place.
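A periodic check of this kind could be as simple as auditing a deployment inventory against approved and deprecated algorithm lists. The system names and lists below are hypothetical examples, not an authoritative standard.

```python
# Illustrative audit: flag deployed ciphers that need review or upgrade.
APPROVED = {"AES-256-GCM", "AES-128-GCM", "ChaCha20-Poly1305"}
DEPRECATED = {"DES", "3DES", "RC4"}

deployments = {                   # hypothetical inventory of systems
    "payments-db": "AES-256-GCM",
    "legacy-archive": "3DES",
    "mail-gateway": "RC4",
}

for system, cipher in deployments.items():
    if cipher in DEPRECATED:
        print(f"UPGRADE REQUIRED: {system} uses deprecated cipher {cipher}")
    elif cipher not in APPROVED:
        print(f"REVIEW: {system} uses unvetted cipher {cipher}")
```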
Regulating reinforcement learning
Reinforcement learning in AI must be regulated, which involves establishing guidelines and frameworks to ensure its responsible and ethical use.
Transparency in the development and deployment of RL algorithms is crucial. RL developers should produce documentation that describes an algorithm's goals, training data and decision-making processes, especially where RL is used in critical applications that can affect society.
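One lightweight way to operationalise such documentation is a machine-readable record that travels with the deployed agent. The fields and values below are an illustrative assumption about what that record might contain, not a mandated schema.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class RLModelCard:
    """Illustrative documentation record for a deployed RL system."""
    name: str
    objective: str                  # the reward the agent is trained to optimise
    training_environment: str       # where the agent gathered its experience
    training_episodes: int
    known_limitations: list = field(default_factory=list)

card = RLModelCard(
    name="grid-navigator-v1",       # hypothetical system
    objective="reach the goal cell; reward +1 at the goal",
    training_environment="5-cell toy corridor simulator",
    training_episodes=500,
    known_limitations=["behaviour not validated outside the simulator"],
)

print(json.dumps(asdict(card), indent=2))  # publishable alongside the deployment
```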
Liability mechanisms must be in place to hold developers and organizations accountable for the actions of RL algorithms. Frameworks must be developed that comprehensively define rights and liabilities where losses occur and harm is caused by RL-based AI systems.
When personal data of individuals are involved, privacy concerns emerge. Measures must be implemented to ensure compliance with data protection regulations and safeguard user privacy.
Since billions of people already use AI tools in various aspects of their lives, it is necessary for them to have a basic knowledge of RL technology. Policymakers, developers and the general public must all understand the benefits and potential risks associated with RL so that they can make informed choices about its use and create effective policies.
Collaborating with international organizations and regulatory bodies to establish consistent global standards for RL is also advisable.
Cyber security insurance
Cyber insurance becomes more important as AI develops, because AI brings a new set of risks that traditional insurance policies may not cover, including data breaches, property damage, business interruption and even physical harm to humans. What risks AGI models may pose is quite unpredictable. Malicious actors will also seek to misuse these models, trying to steal, corrupt or manipulate them for their own purposes, and insurance must cover the resulting losses. AI systems may likewise fail unintentionally owing to faulty assumptions, design flaws or unexpected situations, producing unsafe or undesirable outcomes. This is another area where cyber insurance can help cover the associated costs and liabilities and provide guidance and support for preventing such failures.
Legal regulation
Existing legal instruments may not be enough to cover and address the risks that will accompany security breaches by AGI. AI security requirements should be integrated into existing data protection laws, and every national parliament should constitute a committee of AI and legal experts to draft stringent laws preventing cyber security breaches by AGI models once they come into use.