Employees routinely copy and paste personal data into ChatGPT
07/10/2025 | LayerX
New research from browser security company LayerX reveals that artificial intelligence (AI) has rapidly become the single largest data security blind spot for modern enterprises. Based on real-world telemetry, the findings of the 2025 Enterprise AI and SaaS Data Security Report show that AI has surpassed traditional enterprise software in both adoption speed and risk profile, with the majority of usage happening outside corporate control.
AI adoption is moving at a breakneck pace: 45% of enterprise employees already use generative AI (GenAI) tools, a level of uptake reached in under three years that traditional software categories took more than a decade to achieve. This usage is highly concentrated, with a striking 92% of all enterprise AI activity occurring within a single platform, ChatGPT, which has become the de facto standard for enterprise AI. AI now accounts for 11% of all enterprise browsing activity, rivalling categories that have long defined workplace productivity.
The primary method of sensitive data exfiltration is, surprisingly, not file uploads but simple copy and paste. The report found that 77% of employees paste data into GenAI prompts. While 40% of files uploaded to GenAI tools contain personal information or Payment Card Industry (PCI) data, the majority of sensitive data movement occurs through the clipboard. On average, employees make 14 pastes per day into non-corporate accounts, at least three of which contain sensitive data. Consequently, GenAI tools alone account for 32% of all corporate-to-personal data transfers.
A core finding is that identity controls are largely ineffective in the AI category. A substantial 67% of AI usage occurs through unmanaged personal accounts. Furthermore, 71% of Customer Relationship Management (CRM) logins and 83% of Enterprise Resource Planning (ERP) logins are non-federated, meaning the critical systems housing sensitive customer and financial data are being accessed without robust corporate identity controls. This demonstrates that governance is virtually non-existent, leaving enterprises effectively blind in their fastest-growing technology category. Traditional Data Loss Prevention (DLP) tools, which are built around file-centric monitoring, are unable to register the copy/paste activity, further exacerbating the security gap.
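To illustrate the gap the report describes: file-centric DLP inspects files as they are uploaded or moved, so data that travels via the clipboard never crosses its inspection point, whereas browser-level monitoring can observe the paste event itself. The sketch below is purely illustrative and assumes a browser content-script context; the detection patterns, logging behaviour and blocking option are illustrative assumptions, not LayerX's implementation.

```typescript
// Hypothetical browser content-script sketch: flag sensitive data in paste
// events before they reach a GenAI prompt field. Illustrative only -- not
// LayerX's implementation; patterns and behaviour are assumptions.

// Simple indicative patterns (real DLP engines use far richer detection).
const PII_PATTERNS: Record<string, RegExp> = {
  email: /[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/,
  // Rough PCI indicator: 13-16 digit runs, allowing spaces or dashes.
  cardNumber: /\b(?:\d[ -]?){13,16}\b/,
  // Approximate UK National Insurance number format.
  ukNationalInsurance: /\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b/i,
};

function findSensitiveMatches(text: string): string[] {
  return Object.entries(PII_PATTERNS)
    .filter(([, pattern]) => pattern.test(text))
    .map(([name]) => name);
}

// File-centric DLP never sees this event: the data moves via the clipboard,
// not an upload. A browser-level agent can intercept the paste itself.
document.addEventListener("paste", (event: ClipboardEvent) => {
  const pasted = event.clipboardData?.getData("text/plain") ?? "";
  const matches = findSensitiveMatches(pasted);

  if (matches.length > 0) {
    // Report (or block) before the prompt is submitted; here we only log.
    console.warn(
      `Paste contains possible sensitive data (${matches.join(", ")}), ` +
        `${pasted.length} characters, on ${window.location.hostname}`
    );
    // event.preventDefault(); // a blocking policy could stop the paste here
  }
});
```

In practice such an agent would report to a policy engine rather than the console and would need far more robust detection than these indicative patterns; the point is simply that clipboard-level visibility requires instrumentation in the browser, not in the file-scanning layer.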
Training Announcement: Freevacy offers a range of independently recognised professional AI governance qualifications and AI Literacy short courses that enable specialist teams to implement robust oversight, benchmark AI governance maturity, and establish a responsible-by-design approach across the entire AI lifecycle. Find out more.
What is this page?
You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. Each item is a brief snippet relating to a single piece of original content, or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.
The Privacy Newsfeed monitors over 300 global publications, from which more than 6,250 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.