
Stop sending humans to an AI gunfight

Don’t let AI give you a false sense of security.

If you look at regulated industries across Southeast Asia—Singapore, Malaysia, Indonesia—the regulators all say the same thing: you must do proper due diligence on your third parties. And rightfully so. Most financial services firms even have dedicated teams for this—privacy, ops, financial risk, cyber.

But here is the reality: those teams are now facing the fact that their vendors are using AI to complete their compliance reports and security assessments. Some are even fabricating assurance reports like SOC 2.

And while the vendors are using AI to speed things up, the people who actually have to secure the relationship are still doing things manually.

It is an untenable mismatch. The number of vendors is growing rapidly because, like it or not, in an interconnected world, working with partners is inevitable. Yet third-party risk management (TPRM) teams are overwhelmed, understaffed, and stuck in the dark ages of manual review.

Sure, there are tools that scan digital assets from the outside, but their findings are often noisy or simply wrong. They cannot see behind the firewall. And what about vendors with no digital presence? You cannot scan a physical process.


So what do we do? We send questionnaires. Then some poor analyst spends days, weeks, or even months reading every single line to match it against internal policies. And then, absurdly, they have to do it all over again every single year.

It is time that AI faces AI

In 2026, forcing teams to manually review AI-generated documentation is not just inefficient, it is a structural weakness. The volume, speed, and variability of AI-assisted outputs have already outpaced human-only review models.

The shift that needs to happen is straightforward. Machines should handle pattern recognition, document parsing, and baseline control mapping at scale. Humans should focus on judgment, context, and challenge. That means interrogating inconsistencies, understanding operational realities, and identifying where assurances do not match actual risk.
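To make the division of labour concrete, here is a minimal sketch of what "baseline control mapping at scale" could look like: questionnaire answers are matched against an internal control catalogue by keyword overlap, and weak matches are routed to a human for judgment. The control names, keywords, and threshold below are purely illustrative assumptions, not any real framework.

```python
# Illustrative control catalogue: control name -> keywords that suggest it.
# These names and keywords are hypothetical examples.
CONTROLS = {
    "access-control": {"mfa", "least", "privilege", "rbac", "access"},
    "encryption": {"aes", "tls", "encrypted", "encryption", "kms"},
    "incident-response": {"incident", "response", "escalation", "postmortem"},
}

REVIEW_THRESHOLD = 2  # fewer keyword hits than this -> route to a human


def map_answer(answer: str) -> dict:
    """Map one questionnaire answer to the best-matching control,
    flagging low-confidence matches for human review."""
    words = set(answer.lower().replace(".", " ").split())
    scores = {name: len(keywords & words) for name, keywords in CONTROLS.items()}
    control, hits = max(scores.items(), key=lambda kv: kv[1])
    return {
        "control": control,
        "hits": hits,
        "needs_human_review": hits < REVIEW_THRESHOLD,
    }


answer = "All admin access requires MFA and follows least privilege."
result = map_answer(answer)
# A strong match is auto-mapped; a vague answer would be flagged instead.
```

A production system would use document parsing and semantic similarity rather than keyword sets, but the shape is the same: the machine does the bulk mapping, and only the ambiguous or inconsistent answers reach an analyst.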

This is not about removing people from the process. It is about restoring their role to where it actually matters.

Because the real risk is no longer just whether a control exists on paper. It is whether anyone can still tell if that paper reflects reality.

And in a world where AI can generate compliance at scale, trust will depend less on what is submitted and more on how rigorously it is verified.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.


