Posted on Leave a comment

The use of GenAI is turning innocent employees into insider threats: Here’s how to fix it

Does your team use GenAI tools to review contracts or other sensitive documents?

If you answered yes, you’re not the minority. It seems harmless enough — you paste company text into ChatGPT, type “Help me review this,” and within seconds, you have an analysis of a confidential document.

It feels fast, easy, and harmless. Yet, many do not realise that they have just uploaded confidential corporate data into a public AI model, now beyond your organisation’s control.

This scenario is anything but theoretical. A 2025  report notes that nearly 1 in 20 enterprise users regularly use GenAI tools, and internal data sent to these platforms has surged 30 times year‑on‑year. The same report found that 72 per cent of this shadow AI use, or employee use on personal accounts, occurs outside IT’s purview.

Crucially, this isn’t about bad actors; it’s about convenience. Employees are simply trying to work smarter. But in the process, they’re unwittingly pivoting into insider threats, leaking data outside detection, under the watch of traditional security systems.

The GenAI-driven insider threat landscape

GenAI tools introduce new risks beyond data copy-paste. Prompt injection attacks, where hidden commands are embedded in documents or queries, can co-opt these systems into revealing confidential info or ignoring security protocols. There are real-world exploits like University of California, San Diego’s (UCSD) Imprompter, which had nearly an 80 per cent success rate in extracting personal data via obfuscated prompts.

The risks are compounded when employees unknowingly expose sensitive information like API keys, login credentials, or confidential files in GenAI platforms. Once that data is retained or intercepted, attackers can exploit it to impersonate trusted users and access corporate systems undetected. In such cases, traditional security tools often fail to flag the activity because the access appears legitimate and the data flows may traverse encrypted channels.

Also Read: Bridging the gender gap in GenAI learning: Strategies to get more women involved

Why traditional security alone isn’t enough

Network-level defences like Data Loss Prevention (DLP) and behavioural analytics (such as User and Entity Behaviour Analytics, or UEBA) are vital parts of a layered security strategy. These software tools monitor activity across the network and applications, scanning for risky behaviour like large data exports or unusual file access patterns. They can flag when an employee uploads sensitive files to unsanctioned cloud platforms or external GenAI tools.

But there are limitations. Many rely on visibility into network traffic or sanctioned applications. But when employees upload sensitive documents into public GenAI platforms, these actions can easily bypass logging and monitoring — especially if traffic is encrypted or routed through personal accounts. And in cases where credentials are compromised, attackers can operate from within, circumventing network protections entirely.

A critical missing puzzle piece lies with elevated security, where data lives in the memory of the endpoint.

Layering hardware-based zero trust into GenAI risk management

This is where hardware-level zero-trust comes in, and I’m not talking about passive security like encryption or key management. Encryption is essential for protecting data at rest, and effective key management ensures only authorised parties can decrypt that data. But neither prevents a legitimate user or a GenAI tool with granted access from reading and exfiltrating sensitive information.

Dynamic hardware-level zero trust moves beyond passive safeguards, enabling organisations with:

  • Continuous validation of access attempts at the chipset or SSD level
  • Anomaly detection for abnormal data reads/writes, including large transfers or mass deletions
  • Autonomous lockdowns that block suspicious activity before data leaves the device

Imagine an employee, unaware of the risks, pastes sensitive login credentials or confidential documents into a public GenAI platform to “streamline” a task. Those details are now retained in the AI model or intercepted by threat actors exploiting vulnerabilities in the platform. Later, hackers use the leaked credentials to access corporate systems and attempt to siphon large volumes of sensitive data.

Also Read: GenAI in lending: Faster approvals, smarter risks, and personalised credit

Traditional security tools might miss this, especially if the attackers use the compromised credentials to operate under the guise of a trusted insider. Network monitoring could also be bypassed if the data exfiltration happens over encrypted channels or through sanctioned apps.

Dynamic hardware-level security, however, can detect unusual access patterns — like mass file transfers or abnormal read/write activity– at the physical layer. It does not rely on user credentials or network visibility. Instead, it autonomously blocks the suspicious transfer before any data leaves the device, effectively neutralising the threat even after the breach of access credentials.

Building a GenAI-aware insider threat strategy

To circumvent this threat, a multilayered strategy beyond traditional network security is critical:

  • Governance and AI-ready policy: Define which AI tools are approved, specify allowed data types, and require employee attestation.
  • Education and culture: Many employees may not be aware of the dangers associated with feeding GenAI tools sensitive data. It’s important to empower them with the right literacy and clear guidelines so AI can be an ally, not an adversary.
  • Hardware-level endpoint security: Equipping drives with embedded zero-trust capabilities provides the final defence, autonomously detecting and preventing unauthorised data movement at the most fundamental layer.

Fix the problem, don’t ban the tool

The goal is not to choke out innovation by banning GenAI; it is to make it as safe as possible. A sample playbook could look like:

  • Approve a selected set of GenAI services
  • Configure DLP and behavioural tools to watch for large data exports
  • Enforce intelligent hardware-secured storage on all endpoints
  • Train staff on what data should not be shared and why

In the GenAI era, employees are usually well-intentioned, not malicious. Yet, without proper safeguards, they can unintentionally act as insider threats. Bridging governance, training, network monitoring, and hardware-based zero-trust turns GenAI into a secure asset rather than a hidden vulnerability.

Security needs to follow the data to the drive, because that’s where the invisible line between productivity and exposure is drawn.

Are you ready to join a vibrant community of entrepreneurs and industry experts? Do you have insights, experiences, and knowledge to share?

Join the e27 Contributor Programme and become a valuable voice in our ecosystem.

Image courtesy: Canva

The post The use of GenAI is turning innocent employees into insider threats: Here’s how to fix it appeared first on e27.

Leave a Reply

Your email address will not be published. Required fields are marked *