Samsung Electronics, the South Korean technology giant, has banned employee use of generative AI tools such as ChatGPT after discovering that staff had uploaded sensitive code to the platform. The company said data transmitted to AI platforms including Google Bard and Bing is stored on external servers, making it difficult to retrieve and delete, and could end up being disclosed to other users.
Samsung conducted an internal survey last month on the use of AI tools and said 65% of respondents believe such services pose a security risk. The company has previously had to deal with the inadvertent leak of internal source code via ChatGPT.
The rules ban the use of generative AI systems on company-owned computers, tablets and phones, as well as on its internal networks. They do not affect the company’s devices sold to consumers, such as Android smartphones and Windows laptops.
Samsung has asked employees who use ChatGPT and other tools on personal devices not to submit any company-related information or personal data that could reveal its intellectual property. Breaking the policies could result in dismissal, Samsung warned.
Samsung is creating its own internal AI tools for translation and document summarisation as well as for software development, and is working on ways to block the upload of sensitive company information to external services. OpenAI, for its part, added an "incognito" mode to ChatGPT last month that lets users prevent their chats from being used to train its AI models.
Samsung told staff it is temporarily restricting the use of generative AI until security measures are in place to create an environment where the technology can be used safely to enhance productivity and efficiency.
The move by Samsung is in line with similar actions taken by banks and other companies, which have banned or restricted the use of ChatGPT over privacy and security concerns.