
OPINION: Artificial Intelligence Regulation, Liability Risks and Insurance Protection

By Sarogenei K
Senior Director, Financial Risk Solutions,
Acclaim Insurance Brokers

Artificial Intelligence (AI) technology has grown rapidly, with wide usage across many sectors. More significant is the development of Generative AI tools, which can be used to create new content, including text, code, images, music, simulations, audio and video. For instance, in January 2023, just two months after ChatGPT (a Generative AI tool) was launched, an estimated 100 million consumers became monthly active users, making ChatGPT the fastest-growing consumer application in history.1

AI Related Issues

There is no doubt that AI and Generative AI tools such as ChatGPT confer many benefits on businesses and individuals, but they are not perfect. Since AI learns everything from data, the results it gives will reflect the data it is trained on. If that data carries its own biases and imperfections, the results will be biased and incorrect. There have been several instances of racial and gender bias, and of misinformation (including convincing fake images and videos known as deepfakes). One example of bias is a recruiting tool developed by Amazon to streamline its hiring process: it was found to be biased against female applicants because male-dominated resumes were used for the training.2
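
To see how this happens mechanically, consider the minimal sketch below. It uses synthetic data and a deliberately naive "model"; it is purely illustrative and is not Amazon's actual system. A model fitted to historically skewed hiring decisions simply reproduces the skew.

```python
import random

random.seed(0)

# Synthetic training data: (years_experience, is_male) -> hired.
# The historical labels favour male applicants regardless of merit.
train = []
for _ in range(1000):
    experience = random.randint(0, 10)
    is_male = random.random() < 0.8        # male-dominated resume pool
    hired = is_male and experience >= 2    # biased historical decisions
    train.append((experience, is_male, hired))

# A naive "model": estimate P(hired) per gender from the training data.
def hire_rate(records, male):
    outcomes = [hired for _, m, hired in records if m == male]
    return sum(outcomes) / len(outcomes)

print("Learned hire rate, male:  ", round(hire_rate(train, True), 2))
print("Learned hire rate, female:", round(hire_rate(train, False), 2))
# Any model fitted to these labels reproduces the bias: equally qualified
# female applicants score lower simply because the training data says so.
```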

Two lawyers and a law firm in the US were fined for submitting fake case law generated by ChatGPT in a court filing.3 On the healthcare side, IBM's AI system for helping to fight cancer reportedly suggested that a cancer patient with severe bleeding be given a drug that would cause the bleeding to worsen.4

AI technology has also given rise to data privacy and security risks, as well as infringement of intellectual property rights. For instance, Samsung discovered in March 2023 that its employees, acting in good faith, had inadvertently breached company confidentiality and compromised its intellectual property by using ChatGPT.5

This is how the incident happened:

  • One employee asked ChatGPT to optimize test sequences for identifying faults in chips, which is a confidential process.
  • Looking for help to write a presentation, another employee entered meeting notes into ChatGPT, putting confidential information for internal use into the public domain. 
  • Employees also pasted sensitive, bug-ridden source code from the company’s semiconductor database into ChatGPT in an attempt to improve it.
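
One practical mitigation is to screen prompts before they leave the corporate network. The sketch below is a hypothetical guard, not Samsung's actual controls; the patterns and the send_to_llm stub are assumptions for illustration, and a real deployment would use the firm's own data loss prevention rules.

```python
import re

# Assumed patterns; a real deployment would use the firm's own DLP rules.
SENSITIVE_PATTERNS = [
    re.compile(r"(?i)\bconfidential\b"),
    re.compile(r"(?i)\binternal use only\b"),
    re.compile(r"(?i)\bproprietary\b"),
    re.compile(r"[A-Za-z_]\w*\s*\([^)]*\)\s*\{"),  # crude source-code heuristic
]

def send_to_llm(prompt: str) -> str:
    """Stub standing in for the call to an external AI service."""
    return "(model response)"

def is_safe_to_send(prompt: str) -> bool:
    """Return False if the prompt looks like it contains company secrets."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

def guarded_submit(prompt: str) -> str:
    """Only forward prompts that pass the confidentiality screen."""
    if not is_safe_to_send(prompt):
        return "BLOCKED: prompt appears to contain confidential material."
    return send_to_llm(prompt)

print(guarded_submit("Please optimise this CONFIDENTIAL test sequence ..."))
```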

Regulating the AI Industry

Issues such as these have led to calls for governments to regulate the AI industry. An article entitled "How countries around the world are trying to regulate artificial intelligence"6 notes that data from Stanford University's 2023 AI Index report shows that 37 AI-related bills were passed into law around the world in 2022. The article also says that while some countries are implementing national regulations to monitor and keep the use and development of artificial intelligence in check, others are still grappling with the issue.

It is interesting to note that although Singapore has not enacted any laws on the use of AI in general, it has addressed specific applications of AI in two statutes:

  • The Road Traffic Act 1961 was amended in 2017 to provide a regulatory sandbox for the trial and use of autonomous motor vehicles; and
  • The Health Products Act 2007 (HPA) requires medical devices incorporating AI technology (AI-MD) to be registered before they are used.

In 2020, Singapore issued a voluntary set of guidelines, the "Model Artificial Intelligence Governance Framework" (Second Edition). According to this Model Framework, the use of AI should be fair, explainable, transparent and human-centric, objectives broadly similar to those of the EU's Artificial Intelligence Act. In addition, in May 2022, Singapore launched AI Verify, a self-assessment framework comprising both technical tests and process checks.7
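
As an illustration of the kind of technical test such a framework involves, the sketch below computes one widely used fairness metric, the demographic parity difference. AI Verify's actual test suite is considerably broader; the data, group names and interpretation threshold here are assumptions for illustration.

```python
# Minimal sketch of one fairness check of the kind a technical test suite
# might run: the gap in favourable-outcome rates between two groups.

def demographic_parity_difference(predictions, groups):
    """Difference in positive-outcome rates between the groups."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Toy model outputs: 1 = favourable decision (e.g. loan approved).
preds  = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")
# A gap near 0 suggests similar treatment across groups; a large gap flags
# the model for the human review step of the governance framework.
```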

Risk-Based Approach of EU’s AI Act

On the other hand, the European Union's new Artificial Intelligence Act, the draft text of which was approved in June 2023, is expected to become the world's first comprehensive legal framework for artificial intelligence.8 The Act takes a risk-based approach, classifying AI applications into four risk levels: "unacceptable risk," "high risk," "limited risk" and "minimal or low risk." AI systems in the last category may be used freely. The "unacceptable risk" category includes social scoring, real-time and remote biometric identification systems such as facial recognition, and voice-activated toys that encourage children to act dangerously. With some exceptions (e.g. "post" remote biometric identification systems, where identification occurs after a significant delay, will be allowed for the prosecution of serious crimes, but only with court approval), AI systems in this category would be banned as they are considered a threat to people.
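
To make the tiering concrete, here is a purely illustrative sketch. The example mappings paraphrase the Act's own published examples; an actual compliance assessment of a specific system requires legal analysis, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned (considered a threat to people)"
    HIGH = "conformity assessment and/or EU registration"
    LIMITED = "transparency obligations"
    MINIMAL = "free use"

# Example systems and the tiers the draft text assigns to comparable cases.
EXAMPLE_TIERS = {
    "social scoring":                      RiskTier.UNACCEPTABLE,
    "real-time remote facial recognition": RiskTier.UNACCEPTABLE,
    "medical device AI":                   RiskTier.HIGH,
    "critical infrastructure operation":   RiskTier.HIGH,
    "chatbot":                             RiskTier.LIMITED,
    "spam filter":                         RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_TIERS.items():
    print(f"{system:38s} -> {tier.name}: {tier.value}")
```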

AI systems classified as high-risk fall into two categories:
i) AI systems used in products covered by the EU's product safety legislation, such as toys, cars and medical devices; and
ii) AI systems falling into eight specific areas (e.g. biometric identification of persons, operation of critical infrastructure, and assistance in legal interpretation and application of the law).

Systems in the first category must be assessed before being placed on the market and throughout their lifecycle, while systems in the second category must be registered in an EU database.

Chatbots and AI systems that generate or manipulate image, audio or video content are classified as "limited risk" and are required to make users aware that they are interacting with a machine, so that users can decide whether to continue. For Generative AI such as ChatGPT, the Act imposes transparency requirements (a sketch of the first requirement follows the list):
i) disclosing that the content was generated by AI;
ii) designing the model to prevent it from generating illegal content; and
iii) publishing summaries of copyrighted data used for training.9
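
As a concrete illustration of requirement (i), the sketch below attaches a machine-readable disclosure to generated output so downstream users know it came from a model. The field names and model name are illustrative assumptions, not prescribed by the Act.

```python
import json
from datetime import datetime, timezone

def label_generated_content(text: str, model_name: str) -> str:
    """Bundle generated text with a disclosure that AI produced it."""
    disclosure = {
        "ai_generated": True,
        "model": model_name,                    # assumed identifier
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    # Deliver the disclosure alongside the content itself.
    return json.dumps({"disclosure": disclosure, "content": text})

print(label_generated_content("Draft marketing copy ...", "example-llm-1"))
```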

It is reported that the Act will have extraterritorial effect, like the General Data Protection Regulation (GDPR), and will add to compliance costs for firms in Southeast Asia. Singapore AI companies operating in the EU, or planning to expand there, should consider investing in legal and compliance processes, licences and certifications, and product research and development.

Legal Liabilities and Insurance

The AI industry, including the users of AI tools, should be mindful that legal liabilities can attach to them even in the absence of the EU Act, should an AI system fail or cause damage and/or financial loss. It may not always be easy to determine who exactly is liable for an AI system's failure, because many parties (designer, programmer, developer, data provider, manufacturer, user and the AI system itself) are involved. The nature and cause of the damage have to be established, and expert legal advice may be necessary to ascertain who is responsible for the mistake(s).10

Nevertheless, it is advisable for all parties involved in any AI system to take stock of their risk exposures and protect themselves with appropriate insurance coverage. In this article, we highlight some of the liability risk exposures of certain parties and the types of insurance in the Financial Risks Insurance category that can help protect against or mitigate their liabilities.


AI Designers and Programmers

AI designers and programmers may be considered to be providing professional services. They can face allegations of AI system failure, algorithmic bias or intellectual property infringement. Professional Liability insurance helps protect against and mitigate such risks by paying any damages claimed as well as defence costs.

Board Members

AI systems rely heavily on large volumes of data, which may include sensitive personal and corporate information. Unauthorised access, data leakage or misuse of that data can result in privacy breaches. It is therefore imperative that Board members understand such AI-related risks, including the rules and regulations governing them. They must put in place systems to safeguard data, including employee training, and ensure that the AI systems are secure. They must also ensure that their company complies with AI-related rules and regulations. Board members' failure to do so can be considered a breach of their duties and expose them to personal liability. Directors' & Officers' Liability Insurance will help in the event directors and officers face such allegations, by providing for defence costs and any damages that may be awarded. However, it should be noted that violations of rules or regulations may be excluded under this class of insurance.

Users

Some users, such as lawyers and medical practitioners, can face professional liability if they give wrong advice based on research results from AI tools such as ChatGPT, or if AI medical devices they use malfunction. While Professional Indemnity Insurance (or Medical Malpractice Insurance) will protect them against such liability, it remains to be seen whether liabilities arising from the use of AI tools like ChatGPT would be excluded if outputs are not verified for accuracy.

In addition, it is crucial for businesses utilising AI technologies to have Cyber Security Insurance.  This class of insurance covers losses resulting from data breaches, cyberattacks, media liability, and other digital threats. The policy can help pay for costs and expenses related to incident response, regulatory fines, legal defence, and customer notifications.

Conclusion

AI technology raises complex issues with regard to the obligations and liabilities of the various parties involved. Insurers can be expected to monitor closely the regulatory developments relating to AI (the EU Act and local regulations in other jurisdictions) and decide whether the coverage they provide should be narrowed or widened. Be that as it may, risk managers must take an active role in identifying AI-related risk exposures in their businesses and take appropriate steps to avoid or minimise them.

 

References