AI is no longer a pilot project in financial services. That was the clear message from the 2026 BAFT International Trade & Payment Conference, where a panel on AI in compliance and fraud detection focused less on experimentation and more on execution.
According to Quantifind, the discussion, moderated by BNY senior vice president of global payments and trade Ryan Lastra, brought together Quantifind vice president of strategic client partnerships Teresa Buechner and BNY senior vice president of domestic payments Sumner Francisco to explore how institutions must now govern, explain and collaborate around AI in an increasingly networked risk environment.
The debate has moved on from whether AI belongs in payments and compliance. It is already embedded across transaction monitoring, fraud detection and case management workflows. Instead, the panel centred on what comes next: how firms govern their models, how they ensure explainability, and how they work across institutional boundaries as financial crime becomes more interconnected. The era of speculative AI use cases has ended. The operational phase has begun.
Buechner pointed to tangible results already being delivered by AI-driven systems. In some cases, machine learning has been used to rescore legacy alerts, eliminating nearly 70% of false positives and transforming hundreds of thousands of alerts into a far smaller, higher-risk subset for human review. But the panel stressed that optimising performance within a single institution is no longer sufficient. Financial crime risk does not sit neatly within one bank’s perimeter. It spreads across networks of accounts, counterparties and payment flows. A customer may appear low risk in isolation while being closely connected to high-risk actors elsewhere in the system.
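The alert-rescoring approach described above can be sketched in miniature. This is an illustrative toy, not the panelists' actual system: the signal names, weights and the 0.7 review threshold are all invented assumptions, standing in for a trained machine-learning model.

```python
# Hypothetical sketch: rescore legacy alerts so human reviewers see only a
# smaller, higher-risk subset. All fields, weights and thresholds are
# illustrative assumptions, not details from the panel.

def rescore_alert(alert: dict) -> float:
    """Combine a few illustrative signals into a risk score in [0, 1]."""
    score = 0.0
    if alert["amount"] > 10_000:          # unusually large transaction
        score += 0.4
    if alert["counterparty_flagged"]:     # counterparty on a watch list
        score += 0.4
    if alert["velocity_per_day"] > 5:     # rapid transaction velocity
        score += 0.2
    return min(score, 1.0)

def triage(alerts: list[dict], threshold: float = 0.7) -> list[dict]:
    """Keep only alerts whose rescored risk meets the human-review threshold."""
    return [a for a in alerts if rescore_alert(a) >= threshold]

legacy_alerts = [
    {"id": 1, "amount": 500, "counterparty_flagged": False, "velocity_per_day": 1},
    {"id": 2, "amount": 25_000, "counterparty_flagged": True, "velocity_per_day": 2},
    {"id": 3, "amount": 12_000, "counterparty_flagged": False, "velocity_per_day": 8},
]

for_review = triage(legacy_alerts)  # only alert 2 clears the threshold
```

In a production setting the hand-tuned weights would be replaced by a model trained on historical alert dispositions, but the triage shape is the same: most legacy alerts fall below the threshold and only the high-risk residue reaches investigators.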
Fraudsters understand this dynamic well. They exploit the gaps between institutions, moving funds across multiple entities to evade detection. That reality reframes AI not as a competitive differentiator alone, but as a collective defence tool.
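The network effect described above, where a customer looks clean in isolation but sits a hop or two from high-risk actors, is essentially a graph-reachability check. A minimal sketch, with an entirely invented relationship graph and a two-hop limit chosen for illustration:

```python
from collections import deque

# Hypothetical sketch: an entity may appear low risk on its own while being
# closely connected to known high-risk actors. Graph, entity names and the
# hop limit are invented for illustration.

graph = {
    "customer_a": ["shell_co_1"],
    "shell_co_1": ["customer_a", "sanctioned_entity"],
    "sanctioned_entity": ["shell_co_1"],
    "customer_b": [],
}
high_risk = {"sanctioned_entity"}

def near_high_risk(node: str, max_hops: int = 2) -> bool:
    """Breadth-first search: is any known high-risk actor within max_hops?"""
    seen = {node}
    queue = deque([(node, 0)])
    while queue:
        current, hops = queue.popleft()
        if current in high_risk and current != node:
            return True
        if hops < max_hops:
            for neighbour in graph.get(current, []):
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append((neighbour, hops + 1))
    return False
```

Here `customer_a` would be flagged (two hops from a sanctioned entity) while `customer_b` would not. The catch the panel raised is that no single bank holds the full graph: the edges that matter often cross institutional boundaries, which is why shared signals matter.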
Both Buechner and Francisco emphasised that AI should be viewed as an augmentation layer rather than a replacement for human judgement. Machine learning can identify patterns at scale, and generative AI can accelerate investigative workflows, but institutions still define risk appetite and make final determinations. However, this augmentation is only as strong as the data it can access. When banks operate in silos, even advanced models are constrained. When signals are shared securely and within regulatory boundaries, AI’s effectiveness increases dramatically.
Interestingly, the panel suggested that the infrastructure for such collaboration already exists. Payment systems, including those operated by the Federal Reserve and The Clearing House, function as natural aggregation points for transactional activity. These shared rails could form the foundation for broader, ecosystem-level risk intelligence. Rather than each institution learning the same lessons independently, the industry could identify coordinated fraud earlier and respond more consistently.
Regulation, the panel argued, is not the primary barrier. Concerns around opacity are. Supervisors are less focused on whether AI is deployed and more concerned with whether its outputs can be explained and defended. Strong model governance, traceability and transparency are prerequisites for innovation. Institutions that can clearly articulate how their models work are better placed to collaborate and to engage regulators confidently.
The arms race with fraudsters is already being fought at a network level. Criminals deploy AI to test defences, probe systems and adapt tactics rapidly. Without shared intelligence and continuous retraining of models, each bank bears the cost of relearning the same threats in isolation.
Looking ahead, the panellists described the coming years in terms of convergence and renaissance. Fraud, AML, cyber and payments risk are increasingly intertwined. AI is becoming the connective tissue linking these domains. Yet an additional theme emerged from the discussion: community. The next phase of AI in financial services will not be defined by who builds the best standalone model, but by who combines robust governance, human oversight and cross-institution collaboration to strengthen the financial ecosystem as a whole.
Copyright © 2026 FinTech Global