According to a new study, 15% of employees regularly paste sensitive company data into ChatGPT, putting their employers at risk of a security breach.
The research report, titled “Revealing the True genAI Data Exposure Risk”, analyzed the behavior of over 10,000 employees and examined how they use generative AI apps in the workplace. The findings show that at least 15% of workers use these tools at work, and that in almost 25% of those sessions the user pasted data directly into the app.
According to Cybernews, this behavior is quite common and gradually increasing, with workers pasting sensitive data on a weekly and sometimes daily basis. The numbers in the report are likely to grow as AI-based tools gain popularity.
“Soon, we predict, employees will be using GenAI as part of their daily workflow, just like they use email, chats (Slack), video conferencing (Zoom, Teams), project management, and other productivity tools,” LayerX stated in a 10-page report.
This phenomenon poses significant risks to organizations concerning the security and privacy of sensitive data. Furthermore, the report states that the top categories of confidential information being input into GenAI tools are internal business data (43%) and source code (31%), which pose the highest exposure risks.
The study also found that a significant portion of these workers do not limit themselves to instructions and prompts, but also paste data directly into the app, exposing sensitive company information. “Organizations might be unknowingly sharing their plans, product, and customer data with competitors and attackers,” LayerX stated.
ChatGPT currently boasts 800 million active users per month.