Experts have warned that employees are putting sensitive corporate data at risk by providing it to ChatGPT, the popular artificial intelligence chatbot.
Cyberhaven Labs researchers analyzed ChatGPT usage by 1.6 million workers across industries and found that 5.6% of them had used it in the workplace, with 4.9% feeding it company data that the chatbot could fold into its knowledge base.
While employees believe ChatGPT can improve productivity, it can also lead to confidential data leaks. Some companies, such as JP Morgan and Verizon, have blocked access to ChatGPT to prevent such leaks.
The report by Cyberhaven Labs revealed that less than 1% of employees were responsible for 80% of leaks caused by sharing company data with ChatGPT.
However, as the technology is integrated into more services, for example through the ChatGPT API, this percentage could rise rapidly.
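As a rough illustration of that exposure, consider a minimal, hypothetical sketch of an internal tool built on the ChatGPT API using OpenAI's Python library; the summarize function, placeholder key, and document text below are invented for illustration, but any confidential text passed in leaves the company network and is processed on OpenAI's servers.

```python
# Hypothetical sketch: an internal tool that summarizes documents by sending
# their full text to the ChatGPT API. Confidential content in `document_text`
# is transmitted to OpenAI as-is.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

def summarize(document_text: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Summarize the following document."},
            {"role": "user", "content": document_text},  # sensitive text sent verbatim
        ],
    )
    return response["choices"][0]["message"]["content"]

# Example: pasting a draft contract into the tool sends it to the API word for word.
print(summarize("CONFIDENTIAL: draft acquisition terms ..."))
```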
The researchers also warned that enterprise security software cannot monitor employees' use of ChatGPT, making it difficult to prevent leaks of sensitive or confidential data. They also observed that employees copied data out of the chatbot nearly twice as often as they pasted company data into it.
Researchers also found that the average company leaked sensitive material, such as confidential documents, client data, and source code, to ChatGPT hundreds of times each week.
Raising awareness of the risks of misusing such technology is important, and companies must educate their employees on how to use ChatGPT without exposing confidential data.
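One concrete form that guidance can take, sketched here as a hypothetical example rather than any particular product, is a simple pre-send check that warns an employee when text appears to contain confidential material before it is pasted into the chatbot; the patterns and labels below are illustrative only.

```python
import re

# Hypothetical pre-send check: flag text that looks confidential before it is
# shared with ChatGPT. Patterns are examples, not an exhaustive policy.
SENSITIVE_PATTERNS = [
    (re.compile(r"\bconfidential\b", re.IGNORECASE), "confidentiality marking"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "email address"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "possible payment card number"),
]

def check_before_sending(text: str) -> list[str]:
    """Return the reasons, if any, that the text should not be shared with the chatbot."""
    return [label for pattern, label in SENSITIVE_PATTERNS if pattern.search(text)]

warnings = check_before_sending("Attached is the CONFIDENTIAL client list: jane@example.com")
if warnings:
    print("Do not paste this into ChatGPT:", ", ".join(warnings))
```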
ChatGPT itself is not inherently a risk to company data security, as it does not actively acquire or store company or personal data. However, once confidential or sensitive information is shared on the chatbot platform, it is exposed to the security and privacy risks that come with any data sent to an online service.