The role of regulation for AI as it changes risk and compliance

Moody’s Analytics recently dug deeper into the role AI regulation will play as the technology changes risk and compliance. 

Artificial Intelligence (AI) is poised to revolutionize compliance and risk management. With laws on the safe application of AI technologies on the horizon, several firms are proactively developing policies for responsible and ethical AI usage in anticipation of future regulatory frameworks.

A comprehensive Moody’s study, encompassing feedback from 550 compliance and risk management leaders across 67 nations, reveals a strong consensus: nearly 70% believe AI will significantly influence their practices.

Despite this, AI’s integration into risk and compliance roles remains limited, though early adopters report a positive impact, citing greater efficiency in manual processes (17%) and improved staff performance (27%).

However, apprehensions persist among these leaders, chiefly concerning data privacy and confidentiality (55%), decision-making transparency (55%), and potential misuse or misunderstanding (53%). These concerns underscore the vital need for regulation to ensure artificial intelligence’s safe and responsible deployment.

The landscape of emerging regulation is varied and dynamic. In several jurisdictions, including the US, the EU, and the UK, AI laws are still at the development stage. China stands out as one of the few nations with finalized laws, which enhance security around generative AI (GenAI) and establish oversight agencies.

The US follows a voluntary approach with an AI risk management framework emphasizing safety, security, privacy, equity, and civil rights, complemented by the White House’s Blueprint for an AI Bill of Rights. The EU categorizes AI systems by risk level, mandating assessment and reporting for high-risk systems. The UK is encouraging existing authorities to regulate AI within their own sectors.

Surprisingly, awareness of these regulatory efforts among the professionals Moody’s surveyed is limited. Only 15% consider themselves well informed, while a third say they are completely unaware. This contrasts starkly with the strong demand for new AI usage laws, a sentiment shared by 79% of respondents, highlighting the gap between regulatory developments and industry awareness.

Respondents urge regulators to prioritize data privacy and protection (65%), accountability (62%), and transparency (62%). They advocate for global consistency in regulation, with transparency and human oversight of AI-based outcomes. Regulations should also be adaptable, recognizing artificial intelligence’s rapid evolution, and take risk-based, principles-based approaches to combat financial crime effectively.

Forward-thinking organizations are not idly waiting for regulation. Many are aligning their AI strategies with broader ethical and risk frameworks, understanding that forthcoming regulations will necessitate such policies. Responsible AI policies now feature accountability measures, such as requiring human validation of AI-influenced decisions, and focus on transparency and explainability. They also promote robust data governance and privacy protection, ensuring data access is appropriately controlled.

In the anti-financial crime domain, initiatives such as the Wolfsberg Group’s five principles for artificial intelligence usage are emerging, emphasizing legitimacy, proportionate use, and expertise in AI applications. Despite these advances, challenges remain in explaining AI-based decisions to regulators, determining acceptable levels of human involvement, and managing explainability, privacy, and bias.

Read the full post here.
