A September 2022 article from Forbes reported on the benefits Artificial Intelligence (AI) will have on the future of healthcare. In March 2023, Goldman Sachs predicted that 300 million jobs could be exposed to automation as AI advances. And in June 2023, a US lawyer apologised for using ChatGPT to prepare a court filing after the tool generated fictitious cases and rulings that were cited in the filing.

As AI becomes more pervasive, it brings both benefits and challenges that must be effectively addressed. Therefore, it is essential to establish robust frameworks of Governance, Risk, and Compliance (GRC) to ensure responsible and ethical AI practices.


Governance

Governance involves defining policies, procedures, and decision-making processes to guide the development, deployment, and operation of AI systems. Through the implementation of robust governance practices, organisations can promote transparency, mitigate bias, and ensure that AI technologies align with their values and objectives. Examples of governance policies include an acceptable use policy for AI, data security and confidentiality policies, an access control policy for authorised AI usage, and an AI ethics policy.


Risk Management

The advent of AI brings new risks that must be identified, assessed, and managed appropriately. These risks include biased decision-making, privacy breaches, security vulnerabilities, and regulatory non-compliance. For instance, it was widely reported in May 2023 that cyberattacks in India surged by 18% in the first quarter of 2023, a rise attributed in part to the misuse of AI tools such as ChatGPT. GRC frameworks can help organisations systematically analyse and mitigate these risks. By conducting risk assessments, implementing monitoring mechanisms, and establishing controls, businesses can minimise the potential negative impacts of AI and safeguard against legal, reputational, and operational risks.


Compliance

AI applications are subject to various legal and regulatory requirements. GRC frameworks help align AI initiatives with data protection and privacy regulations (e.g. the PDPA and GDPR), as well as regulations governing financial services, healthcare, consumer protection, and cybersecurity. Ensuring compliance with these regulations is paramount to avoiding legal consequences and reputational damage to the organisation. A standard compliance check when using AI concerns data privacy: steps should be taken to protect personal data, including obtaining consent, anonymising data where necessary, and implementing appropriate security measures.
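To illustrate the anonymisation step, the following minimal Python sketch (the field names and record are hypothetical) pseudonymises direct identifiers by replacing them with salted hashes before records are passed to an AI system:

    import hashlib
    import secrets

    # Hypothetical salt; in practice it would be generated once and stored securely.
    SALT = secrets.token_hex(16)

    def pseudonymise(value: str) -> str:
        """Replace a direct identifier with a salted SHA-256 hash."""
        return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()

    # Hypothetical customer record containing personal data.
    record = {"name": "Jane Tan", "email": "jane.tan@example.com", "purchase_total": 420.50}

    # Hash the direct identifiers; keep non-identifying fields for analysis.
    safe_record = {
        "name": pseudonymise(record["name"]),
        "email": pseudonymise(record["email"]),
        "purchase_total": record["purchase_total"],
    }

    print(safe_record)

Note that salted hashing of this kind is pseudonymisation rather than full anonymisation; it reduces, but does not eliminate, the risk of re-identification.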


Data Privacy and Security

AI relies heavily on data, often involving sensitive and personal information. GRC frameworks provide guidelines and controls to ensure data privacy and security throughout the AI lifecycle. This includes data handling practices, secure storage and disposal, access controls, and compliance with relevant data protection regulations.


Accountability and Transparency

GRC practices promote accountability in AI development and usage. They emphasise the need for clear lines of responsibility, proper documentation, and sound auditability of AI systems. This enables stakeholders to understand and verify how AI systems operate, how decisions are made, and how potential biases or risks are addressed.


Ultimately, the implementation of GRC ensures that emerging technologies such as AI are developed and used in a responsible, ethical, and compliant manner, while mitigating risks, enhancing accountability, and maintaining public trust in these systems.


To find out more about RSM's Technology, Media & Telecommunications Practice, please contact our specialists:

Adrian Tan
Partner & Industry Lead, Technology, Media & Telecommunications
[email protected] 
T +65 6594 7876

Hoi Wai Khin
Partner & Deputy Industry Lead, Technology, Media & Telecommunications
[email protected] 
T +65 6594 7880