Samsung employees may have leaked sensitive company data to ChatGPT

Samsung employees operate under some of the strictest privacy and secrecy policies of any tech company. Like Apple, Samsung wants to keep its sensitive data under wraps: new products, company assets, and development code are valuable things to be protected.


But it doesn’t stop there: the data you share with ChatGPT could be shown to other users if it matches their queries. It’s evident that three Samsung employees did not read the fine print before using ChatGPT to perform some of their tasks.

The report came from The Economist Korea and was spotted by Mashable. According to Engadget, one employee reportedly asked the chatbot to check sensitive database source code for errors, another solicited code optimization, and a third fed a recorded meeting into ChatGPT and asked it to generate minutes.

According to Engadget, Samsung learned about the security problem and has implemented a 1,024-character limit on ChatGPT prompts. The employees involved are under internal investigation, and reports say Samsung is developing its own AI chatbot to prevent future security breaches.
