
AI doesn’t fix broken risk systems; it exposes them: SEON’s Tamas Kadar

Tamas Kadar, co-founder and CEO of SEON

Across Asia Pacific, AI is already embedded in fraud prevention and anti-money laundering (AML). Most organisations use it daily and trust it, yet many still cannot connect its outputs into a single, coherent view of risk.

For Tamas Kadar, co-founder and CEO of SEON, an AI command centre for fraud prevention and AML compliance, that gap defines the current phase of the industry.

“AI is already well established,” he says. “The issue is what happens when those outputs need to support decisions across the full lifecycle.”


In many companies, onboarding, transaction monitoring, screening, and investigations still sit in separate systems. AI improves individual workflows, but without integration, it does not improve decision-making overall. The result? Fragmented visibility.

The real question is no longer whether companies use AI, but whether they can turn it into decisions that are consistent, explainable and fast enough for real-time environments.

Integration is becoming a growth advantage

The divide shows up most clearly in growth.

Higher-growth companies are far more likely to have integrated systems. Kadar stops short of claiming direct causation, but the pattern is consistent: companies that scale quickly tend to treat integration as core infrastructure.

“Fraud and AML complexity rises quickly as a business grows,” he says. “The companies that scale best integrate early, so complexity does not become a drag.”

Disconnected systems create friction. Teams spend time reconciling data, decisions lack context, and accountability becomes unclear. Integration reduces that friction and allows businesses to expand without letting fraud losses or compliance bottlenecks spiral.

What “unified” actually means

“Unified” is often used loosely. In practice, it means building a shared backbone.

Fraud and AML teams need access to the same customer context, decision logic, and audit trail. Risk signals, from behaviour to transactions, must feed into one system so AI can understand relationships between them, not just make isolated judgements.

This is difficult to implement. System complexity, talent shortages and incompatible data remain major barriers.

In Southeast Asia, the challenge is amplified by market variation. Payment systems, regulations and fraud patterns differ widely. A workable model is not a single rigid system, but a consistent core with local flexibility layered on top.

The biggest mistake: automating too early

Many companies move fast on AI but skip a critical step.

“The biggest mistake is automating before defining decision ownership,” Kadar says.


Buying multiple tools and automating weak processes are symptoms of the same issue: unclear decision-making structures. Without clarity on how decisions are made and reviewed, automation simply accelerates confusion.

This becomes more serious as companies expand. Fraud and AML decisions need to be explainable, especially when they affect customers and compliance obligations across multiple markets.

AI does not remove that responsibility. It makes it more urgent.

When speed turns into operational debt

Startups often prioritise speed and patch systems later. In fraud and AML, that approach can break down quickly.

Operational debt becomes dangerous when temporary fixes start influencing high-stakes decisions: customer access, financial exposure or regulatory compliance.

The warning signs are straightforward: teams jumping between dashboards, different departments working from conflicting data, and leadership lacking a clear view of risk. At that point, the system is no longer supporting growth. It is slowing it.

There is also a timing problem. Fraud evolves quickly, but many systems are slow to deploy or adapt. Delays increase both costs and exposure to risk.

The challenge is not choosing between speed and structure. It is building systems that can do both.

AI is changing work, not replacing it

Despite expectations, AI has not significantly reduced headcount in fraud and AML. Instead, it has changed the nature of work. Detection has improved, but the overall workload has increased. More users, more transactions and greater regulatory scrutiny have expanded the scope of operations.

AI acts as a force multiplier. It supports analysis and decision-making, but humans remain essential for oversight, interpretation and accountability.

Most organisations still favour human-in-the-loop models. AI assists, but final judgement stays with people.

Accountability cannot be outsourced to AI

As AI becomes more involved in decision-making, responsibility becomes harder to define.

Kadar is clear: accountability does not sit with the model. It sits with the system around it. That includes data quality, decision rules, governance processes and leadership choices. When something goes wrong, the issue is not the algorithm alone, but the broader control environment.

Vendors must provide transparency. Teams must monitor outcomes. Leaders must ensure systems prioritise accountability, not just speed.

The uncomfortable truth

The industry’s biggest misconception is that AI fixes operational problems.

“The uncomfortable truth is that AI exposes weak operations faster than it fixes them,” Kadar says.

Poor data, unclear ownership and disconnected systems become more visible when decisions accelerate. Without a solid foundation, AI simply amplifies existing issues. That is why AI adoption in fraud and AML is not just a technology decision. It is an operating one.


Companies that benefit most are not those with the most tools, but those with the strongest foundations: clean data, clear processes and governance that can scale.

Without that, AI does not create clarity. It creates faster confusion.

The post AI doesn’t fix broken risk systems; it exposes them: SEON’s Tamas Kadar appeared first on e27.
