Insights from a data scientist: Implementing AI in financial crime compliance

The financial services industry is undergoing a transformation driven by artificial intelligence (AI) technologies, particularly machine learning (ML). These advancements are significantly enhancing anti-money laundering (AML), counter-financing of terrorism (CFT), and sanctions screening as part of customer lifecycle management (CLM).

To explore the possibilities and practical applications of these AI-driven solutions, Moody’s consulted a data scientist in the banking sector. Nuray Yücesoy, who holds an MSc in Big Data Analytics and Management and works at BNP Paribas, engaged with Moody’s Industry Practice Lead, Francis Marinier, and Senior Solutions Specialist, Nicolas Pintart, to discuss AI’s role in financial crime detection and prevention.

The full discussion can be found here; below are six key insights from the conversation.

1. AI Implementation in the Banking Sector

Nuray Yücesoy discussed the conservative nature of compliance groups within the banking sector and the challenges of adopting new technologies.

“Fear of new practices resulting from AI and a lack of detailed guidance from regulators pose hurdles. Despite regulators encouraging new technologies, like AI, their stance isn’t fully detailed. Uncertainty in this area poses risks for early adopters who are eager to progress but who may experience uncertain outcomes and approvals related to their innovations.”

2. Concerns Around AI in Compliance Control Environments

Francis Marinier emphasised that AI and ML success relies on data quality and completeness, which impact model accuracy and reliability. Improving data processes and integrating external sources are crucial.

Nicolas Pintart highlighted ethical concerns and bias in AI models, which can undermine fairness, transparency, and accountability in financial crime compliance.

Nuray Yücesoy noted concerns about AI causing a lack of transparency in decision-making and auditing. However, she believes transparency is achievable through explainability and proper event logging, as outlined in the ACPR’s publication Governance of Artificial Intelligence in Finance.
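To make the pairing of explainability and event logging more concrete, here is a minimal sketch, not drawn from the discussion or from any specific bank's system: an interpretable alert-scoring model whose per-feature contributions are written to an audit log alongside each decision. The feature names, model choice, and synthetic data are illustrative assumptions.

```python
# Minimal sketch: explainable alert scoring with an audit trail.
# Feature names, model, and data are illustrative assumptions, not any
# institution's actual implementation.
import json
import logging
from datetime import datetime, timezone

import numpy as np
from sklearn.linear_model import LogisticRegression

logging.basicConfig(filename="aml_audit.log", level=logging.INFO)

# Train on synthetic historical alerts (1 = escalated, 0 = dismissed).
rng = np.random.default_rng(0)
feature_names = ["txn_amount_z", "country_risk", "velocity_7d", "pep_match"]
X_train = rng.normal(size=(500, 4))
y_train = (X_train @ np.array([1.2, 0.8, 0.5, 1.5]) + rng.normal(size=500) > 0).astype(int)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def score_and_log(features: np.ndarray) -> float:
    """Score one alert and log the decision with per-feature contributions."""
    prob = float(model.predict_proba(features.reshape(1, -1))[0, 1])
    # A linear model decomposes naturally: contribution = coefficient * value.
    contributions = dict(zip(feature_names, (model.coef_[0] * features).round(3).tolist()))
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": "demo-0.1",
        "risk_score": round(prob, 3),
        "contributions": contributions,
    }))
    return prob

score_and_log(np.array([2.1, 0.4, -0.3, 1.0]))
```

Because the score decomposes into per-feature contributions, each logged record can later be replayed to show an auditor why a given alert received its risk score.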

3. Advice for Engaging with AI-Driven Financial Crime Solutions

Nuray Yücesoy emphasised the importance of strategic data management for effective AI-driven compliance. She highlighted the need for scalable infrastructures and strong governance to ensure data quality and integrity. Yücesoy also stressed that AI should not operate in isolation, noting that AI initiatives must align with broader regulatory and compliance frameworks, with tools that are transparent and explainable to regulators.

Additionally, Yücesoy pointed out the necessity for AI models in financial crime solutions to continuously evolve to respond to emerging threats and changing regulations. This requires ongoing training, fine-tuning, and validation. By focusing on these areas, compliance professionals can harness the potential of AI to create more robust and responsive compliance environments, integrating both technological and human elements effectively.
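As a hypothetical illustration of the ongoing training and validation Yücesoy describes, the sketch below re-checks a model against the most recent labelled alerts and retrains it when performance falls below an assumed threshold. The synthetic data, model choice, and 0.80 AUC cut-off are assumptions for illustration, not a prescribed approach.

```python
# Minimal sketch of periodic validation and retraining.
# The 0.80 AUC threshold and synthetic data are assumptions for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def make_batch(n, shift=0.0):
    """Synthetic labelled alerts; `shift` mimics drift in customer behaviour."""
    X = rng.normal(loc=shift, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(size=n) > shift * 5).astype(int)
    return X, y

X_hist, y_hist = make_batch(2000)
model = GradientBoostingClassifier().fit(X_hist, y_hist)

def validate_and_maybe_retrain(model, X_recent, y_recent, threshold=0.80):
    """Re-validate on recent labelled cases; retrain if performance degrades."""
    auc = roc_auc_score(y_recent, model.predict_proba(X_recent)[:, 1])
    if auc < threshold:
        print(f"AUC {auc:.3f} below {threshold}: retraining on recent data")
        model = GradientBoostingClassifier().fit(X_recent, y_recent)
    else:
        print(f"AUC {auc:.3f} within tolerance: keeping current model")
    return model

X_new, y_new = make_batch(500, shift=1.5)   # drifted behaviour
model = validate_and_maybe_retrain(model, X_new, y_new)
```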

4. Preventing Bias in AI Models and Systems

Nicolas Pintart explained that bias in AI can originate from various sources, including biased training data, flawed model assumptions, and the subjective nature of human decision-making. He emphasised that if training data is not carefully sourced and curated, it can contain historical and statistical biases that AI models may inadvertently learn and perpetuate. Therefore, meticulous data collection is critical to the success of these models.

Nuray Yücesoy added that bias often stems from the data used to train AI models, reflecting historical inequalities or present-day disparities. These biases can lead to discriminatory outcomes or unfair treatment of certain groups, potentially exacerbating social inequities. However, she pointed out that these issues can be addressed by designing solutions that do not rely on information such as gender, age, or nationality. Yücesoy advocated for adopting transparent AI methodologies to facilitate trust and highlighted that supervised models dependent on historical decisions could carry higher risks of bias compared to semi-supervised or unsupervised models. By proactively addressing bias, financial institutions can harness the power of AI in compliance fairly and inclusively.
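The design choice Yücesoy mentions, building models that do not rely on attributes such as gender, age, or nationality, can be illustrated with a short, hypothetical sketch: protected columns are dropped before training, and the remaining features are screened for obvious proxies. The column names, synthetic data, and correlation threshold are assumptions for illustration only.

```python
# Minimal sketch: exclude protected attributes and screen for proxy features.
# Column names and synthetic data are illustrative assumptions only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "nationality_code": rng.integers(0, 5, n),
    "age": rng.integers(18, 80, n),
    "gender": rng.integers(0, 2, n),
    "txn_amount": rng.exponential(1000, n),
    "cross_border_ratio": rng.uniform(0, 1, n),
    "escalated": rng.integers(0, 2, n),
})

PROTECTED = ["nationality_code", "age", "gender"]
X = df.drop(columns=PROTECTED + ["escalated"])
y = df["escalated"]

# Screen the remaining features for obvious proxies of protected attributes.
for feature in X.columns:
    for attr in PROTECTED:
        corr = df[feature].corr(df[attr])
        if abs(corr) > 0.4:   # illustrative threshold
            print(f"Warning: {feature} correlates with {attr} (r={corr:.2f})")

model = LogisticRegression(max_iter=1000).fit(X, y)
print("Model trained on", list(X.columns))
```

Dropping protected attributes alone does not guarantee fairness, since other features can encode them indirectly, which is why the proxy check (and, in practice, more formal fairness testing) matters.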

5. A Data Scientist’s Aspiration for AI in Compliance

Nuray Yücesoy sees AI, especially ML technology, as redefining financial crime compliance by offering faster, more effective, and consistent solutions. She highlights the immense potential benefits for regulatory compliance and societal well-being, emphasising the need for collaboration and transparency to sustain model explainability, innovation in use cases to improve crime prevention and detection, and a commitment to ethical, auditable decision-making.

6. What Compliance Professionals Need from Data and Solution Providers

Nuray Yücesoy emphasises the need for high-quality data in AI-driven compliance. She advocates for comprehensive, granular, and frequently updated data solutions that enhance decision-making and seamlessly integrate with existing systems through APIs and modular solutions.

Yücesoy also stresses the importance of transparency and explainability in AI systems, with clear documentation accessible to both data scientists and non-technical stakeholders. She calls for continuous support from data providers to navigate new challenges and adapt to regulatory changes, ensuring compliance professionals are well-equipped for the future.

Read the full story here.
