AI-generated deepfakes have emerged as the most alarming threat to the crypto ecosystem, with losses projected to reach US$25 billion by the end of 2024. In the first half of 2024 alone, crypto scammers siphoned US$679 million in user funds, a figure closely in line with the last two quarters of 2023 and a sign that scammers have no plans of slowing down.
“Pig butchering” has proven one of the most successful forms of these scams. In one of many examples, Hong Kong authorities uncovered an operation in October 2024 in which deepfake technology was used to create fake romantic profiles, eventually luring victims into fraudulent crypto investments.
For context, over 15 billion AI-generated images were created in a single year, the equivalent of 150 years of photography. Staggering as that development is, it has opened a Pandora’s box of digital fraud that poses a serious threat to all digital transactions.
AI deepfakes in crypto: A new era of crime
With minimal technical knowledge, anyone can now use AI to generate hyper-realistic deepfake content. As a result, it is becoming increasingly hard to distinguish legitimate media from deceptive media.
Deepfakes are particularly concerning in crypto for two reasons:
- Sybil attacks: Criminals can use AI-generated identities to spin up multiple fake personas, manipulating blockchain governance systems and consensus mechanisms. These attacks undermine the fundamental trust on which decentralised systems depend.
- Bypassing KYC: Deepfakes enable bad actors to appear as legitimate users and bypass Know Your Customer (KYC) checks, allowing them access to financial services using false identities.
The computing power used for AI training has quadrupled year on year for the past decade, making deepfakes both more sophisticated and more accessible to anyone, including bad actors. Scammers can now create convincing videos impersonating influential figures promoting fake crypto schemes, as seen with Elon Musk.
This rapid progression, paired with the potential for misuse, highlights the critical need for solutions that ensure these powerful technologies are developed and used responsibly.
Tokenised identity: A defence against deepfakes
The rise of deepfakes has exposed vulnerabilities in traditional security systems, particularly those reliant on outdated KYC and anti-fraud measures. As these defences struggle to keep pace, tokenised identity emerges as a powerful solution.
Tokenised identity leverages blockchain technology to verify and authenticate identities. Against deepfakes, tokenised identity is rooted in three principles: dynamic verification, source validation, and immutable records. This multi-layered verification approach targets the vulnerabilities exploited by AI deepfakes.
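To make these principles concrete, the sketch below shows one possible shape for a tokenised identity credential. The field names and types are illustrative assumptions, not any specific vendor’s schema.

```typescript
// Hypothetical shape of a tokenised identity credential, organised around
// the three principles above. All names here are illustrative only.
interface TokenisedIdentityCredential {
  subjectDid: string;        // decentralised identifier of the verified person
  livenessProof: {           // dynamic verification: evidence of a real-time check
    challengeId: string;
    verifiedAt: string;      // ISO timestamp of the liveness session
  };
  consentSignature: string;  // source validation: signature over a consent statement
  recordHash: string;        // immutable record: hash anchored on-chain
}

// A verifier would require all three elements before accepting the identity.
function isComplete(cred: TokenisedIdentityCredential): boolean {
  return (
    cred.livenessProof.verifiedAt.length > 0 &&
    cred.consentSignature.length > 0 &&
    cred.recordHash.length > 0
  );
}
```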
At the forefront of these innovations are face comparisons for continuity checks and liveness detection systems. Unlike a traditional static check, such as uploading a passport photo, a dynamic check asks you to do something in real time, such as moving your camera closer to your face.
An advanced deepfake scammer could present a convincing fake photo and fool a static system, but dynamic verification makes that far more challenging. Systems that can prove a genuine, live human interaction are the first step.
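A minimal sketch of what such a challenge-response liveness check might look like is shown below. The challenge prompts, time window, and match threshold are assumptions for illustration only.

```typescript
// Sketch of a challenge-response liveness check: issue a random prompt,
// then require a fresh, matching response within a short window.
import { randomInt } from "node:crypto";

const CHALLENGES = ["turn your head left", "move the camera closer", "blink twice"];
const MAX_RESPONSE_MS = 10_000; // response must arrive within 10 seconds (assumed)

interface LivenessResponse {
  challenge: string;   // which prompt the client claims to have answered
  capturedAt: number;  // capture timestamp (ms since epoch)
  matchScore: number;  // 0..1 similarity between the live capture and the enrolled face
}

function issueChallenge(): { challenge: string; issuedAt: number } {
  return { challenge: CHALLENGES[randomInt(CHALLENGES.length)], issuedAt: Date.now() };
}

function verifyLiveness(
  issued: { challenge: string; issuedAt: number },
  resp: LivenessResponse,
): boolean {
  const fresh = resp.capturedAt - issued.issuedAt <= MAX_RESPONSE_MS;
  const sameChallenge = resp.challenge === issued.challenge;
  const faceMatches = resp.matchScore >= 0.9; // threshold is an assumption
  return fresh && sameChallenge && faceMatches;
}
```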
The next step extends beyond facial recognition to source verification: ensuring individuals have explicitly consented to the use of their likeness and validating the authenticity of their identity claim. This prevents a bad actor from using an Elon Musk deepfake to open a crypto account, even if the video is perfect and bypasses dynamic verification, because they have neither Elon’s real ID nor his actual permission. By utilising tokenised identities that prove both consent and authenticity, businesses can establish a powerful defence against deepfakes.
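Source validation can be sketched as a simple signature check: the platform confirms that a consent statement was signed with a key bound to the verified identity. The example below uses Node’s built-in Ed25519 verification purely for illustration; key management and the exact consent wording would differ in practice.

```typescript
// Sketch of source validation: check that the person named in the identity
// claim actually signed a consent statement during onboarding.
import { verify, KeyObject } from "node:crypto";

function consentIsValid(
  consentStatement: string,     // e.g. "I authorise <platform> to use my verified likeness"
  signature: Buffer,            // produced by the subject's private key
  subjectPublicKey: KeyObject,  // public key bound to the verified identity
): boolean {
  // For Ed25519, Node's verify() takes null as the digest algorithm.
  return verify(null, Buffer.from(consentStatement, "utf8"), subjectPublicKey, signature);
}
```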
The final step in combating deepfakes with tokenised identity is tying digital assets to verified individuals through immutable, privacy-preserving blockchain records. This ensures that once an identity is verified, its proof cannot be tampered with or duplicated when used on-chain.
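One privacy-preserving way to achieve this, sketched below, is to anchor only a hash of the verified credential on-chain, making the record tamper-evident without exposing personal data. The function names are illustrative, and the actual anchoring mechanism depends on the ledger the platform uses.

```typescript
// Sketch of the immutable-record step: only a fingerprint of the verified
// credential goes on-chain; the credential itself stays off-chain.
import { createHash } from "node:crypto";

function credentialFingerprint(credentialJson: string): string {
  return createHash("sha256").update(credentialJson).digest("hex");
}

// Later, anyone can recompute the hash from the off-chain credential and
// compare it to the anchored value; any tampering changes the fingerprint.
function matchesAnchor(credentialJson: string, anchoredHash: string): boolean {
  return credentialFingerprint(credentialJson) === anchoredHash;
}
```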
For businesses concerned about protecting their platforms and users, these enhanced identity checks become a crucial safeguard against unauthorised access and identity theft.
Implementing a new standard of security
Across the industry, identity verification is not approached as an exact science. Instead, it’s thought of as a matrix of signals evaluated in aggregate, of which ID document verification is just one potential component.
Businesses should choose a level of identity verification appropriate to their specific use case and risk tolerance. Where a high degree of certainty is required, checks such as knowledge-based authentication, re-verification at the time of transaction, and document verification can easily be added to help combat AI deepfakes.
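As a rough illustration of this “matrix of signals” approach, the sketch below aggregates a handful of hypothetical checks into a weighted score that is compared against a per-use-case threshold. The signals, weights, and thresholds are assumptions, not an industry standard.

```typescript
// Sketch of risk-based verification: evaluate several signals in aggregate
// rather than relying on any single check.
interface VerificationSignals {
  documentCheckPassed: boolean;       // ID document verification
  livenessPassed: boolean;            // dynamic face/liveness check
  knowledgeBasedAuthPassed: boolean;  // KBA questions
  reverifiedAtTransaction: boolean;   // fresh check at transaction time
}

const WEIGHTS: Record<keyof VerificationSignals, number> = {
  documentCheckPassed: 0.3,
  livenessPassed: 0.35,
  knowledgeBasedAuthPassed: 0.15,
  reverifiedAtTransaction: 0.2,
};

function riskScore(s: VerificationSignals): number {
  // Sum the weights of the signals that passed; 1.0 means every signal passed.
  return (Object.keys(WEIGHTS) as (keyof VerificationSignals)[])
    .reduce((sum, key) => sum + (s[key] ? WEIGHTS[key] : 0), 0);
}

// A high-risk use case (e.g. large withdrawals) could demand a higher
// threshold than a low-risk one (e.g. read-only access).
function meetsThreshold(s: VerificationSignals, threshold: number): boolean {
  return riskScore(s) >= threshold;
}
```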
When it comes to beating AI, in-person verification (physically comparing a person to their ID) remains the ultimate indicator that someone is who they claim to be.
Conclusion
AI deepfakes represent a US$25 billion threat that crypto can no longer afford to ignore. To protect users and restore trust in the ecosystem, the industry must embrace solutions like tokenised identity. These tools can ensure authenticity, safeguard privacy, and enable a more secure digital future.
Collaboration between the blockchain and AI industries will be essential to developing strong measures against deepfake fraud. Action must be taken now so that risks can be mitigated and the full potential of decentralised technology can be unlocked.