AI Fights Money Laundering and Blocks Russian Oligarchs

The Challenge of Financial Crime

Banks and financial institutions are facing a growing threat from fraud and money laundering. At the same time, they are under increasing pressure to comply with stricter financial regulations. Yet despite compliance spending rising by up to 10% a year in some advanced markets between 2015 and 2022, the financial industry detects only about 2% of global financial crime flows, according to Interpol.

This has led many to believe that artificial intelligence (AI) could play a crucial role in addressing these challenges. In Norway, a fintech startup named Strise has developed an AI platform designed to scan public registries and media reports for potential money-laundering risks in real time.

Streamlining the KYC Process

The AI system is specifically tailored to help financial institutions, such as banks, insurance companies, and payment services, verify new account applications in line with European anti-money laundering legislation. This process is part of the Know Your Customer (KYC) requirements, which aim to ensure that clients are properly identified and their sources of funds are verified.

Traditionally, KYC checks have been time-consuming and labor-intensive, requiring compliance analysts to sift through databases, corporate filings, and news reports to confirm ownership, trace connections, and identify potential risks. These checks are essential to prevent criminals from using legitimate banks to move illicit funds.

However, this manual process is slow and expensive. "Now you can have AI that retrieves information and puts it together in a whole new way," said Marit Rødevand, co-founder and CEO of Strise, during an interview with Euronews Next.

Detecting High-Risk Entities

Strise's AI system can identify warning signs such as links to sanctioned individuals, high-risk jurisdictions, or politically connected figures. Using it, analysts can spot red flags on individuals who appear on sanctions lists, or on politicians who may be "highly influential" or "more susceptible to corruption" and "money laundering."

For example, the system can flag a possible Russian oligarch's ownership in a company portfolio. "Once you have that information, you can choose from a portfolio level whether or not you want to complete that onboarding with the calculated risk classification," said Robin Lycka, a solution architect at Strise.
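This kind of portfolio-level screening can be illustrated with a simple rules-based sketch. The reference data, names, and risk categories below are invented for illustration; Strise's actual model is not public, and real systems pull sanctions lists and ownership registries in real time rather than from hard-coded sets:

```python
from dataclasses import dataclass

# Hypothetical reference data -- real screening systems query official
# sanctions lists, PEP registers, and corporate registries in real time.
SANCTIONS_LIST = {"Ivan Petrov"}
PEP_REGISTER = {"Anna Larsen"}
HIGH_RISK_JURISDICTIONS = {"RU", "IR", "KP"}

@dataclass
class Owner:
    name: str
    country: str  # ISO country code of registration
    stake: float  # fractional ownership, 0.0-1.0

def screen_company(owners: list[Owner]) -> list[str]:
    """Return human-readable red flags for a company's ownership chain."""
    flags = []
    for o in owners:
        if o.name in SANCTIONS_LIST:
            flags.append(f"{o.name}: on a sanctions list ({o.stake:.0%} stake)")
        if o.name in PEP_REGISTER:
            flags.append(f"{o.name}: politically exposed person")
        if o.country in HIGH_RISK_JURISDICTIONS:
            flags.append(f"{o.name}: linked to high-risk jurisdiction {o.country}")
    return flags

# Example portfolio check: one sanctioned owner in a high-risk
# jurisdiction, one politically exposed co-owner.
flags = screen_company([
    Owner("Ivan Petrov", "RU", 0.35),
    Owner("Anna Larsen", "NO", 0.65),
])
for flag in flags:
    print(flag)
```

In practice, the aggregated flags would feed a risk classification that compliance teams use to decide whether to complete onboarding, as Lycka describes.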

In another instance, the platform identified an Estonian-based company linked to two individuals involved in one of the largest cryptocurrency frauds in history, worth $560 million (€480 million).

Enhancing Efficiency and Accuracy

The platform can also generate reports and summaries of its findings, using large language models (LLMs) to compile risk narratives for regulatory filings. This task previously required hours of manual writing.

Rødevand expressed hope that AI could shift the focus from mere checkbox compliance to more effective prevention of financial crime. "There are so many cases in the media and personal stories about lives being devastated by these types of crimes. And I truly want us to help change that," she added.

Regulatory Developments and Considerations

The European Union is finalizing a sweeping anti-money laundering reform, which includes a new Anti-Money Laundering Authority (AMLA) based in Frankfurt and an EU-wide directive set to take effect in 2027. The initiative aims to combat money laundering and the financing of terrorism.

Stanislaw Tosza, an associate professor in Compliance and Law Enforcement at the University of Luxembourg, highlighted that the reform introduces a "new area of responsibility." He noted that the expanding scope of anti-money laundering (AML) obligations, combined with the increasing risk of sanctions for non-compliance, makes AI an attractive tool for financial institutions managing these responsibilities.

Tosza also emphasized that under EU data protection law, human oversight is required when automated systems make decisions that significantly affect people.

Reducing False Positives

Strise claims that its customers have reduced false positives, cases where a system flags legitimate activity as suspicious, by "30 to 40 per cent with automated customer monitoring."
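A back-of-envelope calculation shows why a reduction on that scale matters. All of the numbers below (alert volume, false-positive rate, minutes per review) are invented assumptions for illustration, not figures from Strise:

```python
# Back-of-envelope illustration: how a 30-40% drop in false positives
# translates into analyst hours. Every input here is an assumption.
alerts_per_month = 10_000    # assumed monthly alert volume
false_positive_rate = 0.95   # assumption: most AML alerts are false positives
minutes_per_review = 20      # assumed time to clear one alert

fp_alerts = alerts_per_month * false_positive_rate
for reduction in (0.30, 0.40):
    saved_hours = fp_alerts * reduction * minutes_per_review / 60
    print(f"{reduction:.0%} fewer false positives -> "
          f"{saved_hours:,.0f} analyst hours saved per month")
```

Under these assumptions, a 30-40% reduction frees roughly a thousand analyst hours a month, time that can be redirected from clearing spurious alerts to investigating genuine risk.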

Lars Lunde Birkeland, Strise CMO, stated that "this means far less manual work for analysts who would otherwise spend hours reviewing unnecessary risk alerts rather than catching real risk and fighting financial crime."

However, experts caution that while automation can reduce false positives, it may also make errors harder to detect or contest. "The integration of AI into these decision-making processes further reduces transparency: it may become even more difficult for affected individuals to understand the basis for such evaluations or to challenge them effectively," Tosza said.