Kaspersky: Deepfakes emerge as a top cybersecurity concern for 2026

The rise of deepfakes has evolved from a fringe technological curiosity into one of the most pressing cybersecurity concerns heading into 2026, according to new predictions from Kaspersky. As AI adoption accelerates across the Asia Pacific (APAC), the region is becoming both a proving ground for innovation and a frontline for increasingly sophisticated cyber threats.

With 78 per cent of professionals in APAC using AI at least weekly, compared with 72 per cent globally, the scale and speed of adoption are amplifying the risks associated with synthetic content, forcing businesses and governments to rethink digital trust and resilience strategies now. For business owners and policymakers, this means prioritising AI risk assessments and embedding deepfake awareness into national and corporate cybersecurity roadmaps.

Deepfakes are no longer limited to manipulated videos of public figures; they are becoming a mainstream technology encountered by employees, consumers and organisations alike. Kaspersky notes that awareness of deepfake risks is growing, with companies increasingly training staff to recognise synthetic content and reduce the likelihood of fraud.

As deepfakes appear in more formats—video, images, voice and text—they are becoming a “stable element of the security agenda,” requiring structured policies rather than ad hoc responses. Leaders should respond by formalising internal training programmes, updating incident response plans and mandating verification processes for sensitive communications.

The threat is compounded by rapid improvements in deepfake quality and accessibility. While visual deepfakes are already highly convincing, Kaspersky predicts major advances in realistic audio, a key enabler of voice-based scams and impersonation fraud. At the same time, the barrier to entry is falling sharply, with non-experts now able to generate mid-quality deepfakes in just a few clicks.


This democratisation of creation tools means cybercriminals no longer need advanced skills to launch convincing attacks at scale. To counter this, organisations should invest in multi-factor authentication, out-of-band verification, and stricter approval workflows for financial and executive-level requests.

Efforts to label AI-generated content are expected to intensify in 2026, but progress remains uneven. There is still no unified or reliable system for identifying synthetic content, and existing labels can be easily removed or bypassed, particularly in open-source environments. As a result, Kaspersky anticipates new technical and regulatory initiatives aimed at addressing the challenge, though enforcement will lag behind innovation. Policymakers should collaborate across borders to establish minimum standards for AI content labelling, while businesses should not rely solely on labels and instead adopt layered detection and verification controls.

More advanced forms of deepfakes, such as real-time face and voice swapping, will continue to evolve, even if they remain tools for technically skilled attackers. While widespread use is unlikely in the near term, Kaspersky warns that risks will grow in targeted scenarios, including executive fraud, espionage and political manipulation. Increasing realism and the use of virtual cameras will make these attacks harder to detect and more persuasive. High-risk organisations should conduct threat modelling for targeted deepfake attacks and limit the public exposure of executive audio and video data wherever possible.

The growing use of open-weight AI models is also blurring the line between legitimate and malicious applications. As these models approach the capabilities of closed systems in cybersecurity-related tasks, they offer more opportunities for misuse due to weaker safeguards. At the same time, AI-generated phishing emails, fake websites, and synthetic brand assets are becoming increasingly indistinguishable from legitimate content, especially as companies themselves adopt AI in their marketing and communications. Businesses must strengthen brand protection, monitor for impersonation and educate customers on official communication channels to reduce fraud risks.

“Attackers are using [AI] to automate attacks, exploit vulnerabilities, and create highly convincing fake content,” said Vladislav Tushkanov, research development group manager at Kaspersky. “At the same time, defenders are applying AI to scan systems, detect threats, and make faster, smarter decisions.”


For the APAC region, the stakes are particularly high. “Asia Pacific is setting the global pace for AI adoption,” said Adrian Hia, managing director for Asia Pacific at Kaspersky. “This momentum is creating tremendous opportunity, but also redefining how cyber threats emerge and scale.”

As deepfakes cement their place as a top cybersecurity concern of 2026, resilience will depend on preparation rather than reaction.

Kaspersky recommends regular data backups kept isolated from the network, along with the use of advanced security platforms to detect and neutralise complex threats. These steps, which policymakers and business leaders alike must champion, are crucial to safeguarding trust in an AI-driven economy.

The lead image in this article is AI-generated.

The post Kaspersky: Deepfakes emerge as a top cybersecurity concern for 2026 appeared first on e27.