Samsung investigates as three employees allegedly leak sensitive data to ChatGPT

Samsung is reportedly investigating three employees who allegedly leaked confidential information to the popular AI chatbot ChatGPT. According to reports, the employees used the chatbot to check sensitive database source code for errors, to request code optimizations, and to generate meeting minutes from a recording. The security slip-up reportedly prompted Samsung to cap employee ChatGPT prompts at roughly one kilobyte and to begin building its own in-house chatbot to prevent future mishaps.

However, the incident highlights a broader concern about the use of chatbots such as ChatGPT. As the service’s data policy states, unless users explicitly opt out, their prompts may be used to train its models. OpenAI, the maker of ChatGPT, urges users not to share sensitive information in their conversations, as it is “not able to delete specific prompts from your history.” The only way to remove personally identifying information from ChatGPT is to delete the user’s account, a process that can take up to four weeks.

While chatbots like ChatGPT can be useful tools for a range of work tasks, it’s crucial to remember that anything shared with them could be used to train the underlying model and could surface in responses to other users. As such, businesses should be cautious when allowing chatbot use and put basic safeguards in place, such as the kind of pre-submission check sketched below, so that sensitive information is not shared inadvertently.
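As a rough illustration of what such a safeguard might look like, the following is a minimal, hypothetical sketch in Python of a pre-submission filter that enforces a one-kilobyte prompt cap (matching the limit Samsung reportedly imposed) and scans for a few secret-like patterns before a prompt is allowed to leave the organization. The function name, the pattern list, and the exact checks are illustrative assumptions, not Samsung’s or OpenAI’s actual implementation.

```python
import re

# Samsung reportedly capped employee prompts at roughly one kilobyte.
MAX_PROMPT_BYTES = 1024

# Hypothetical patterns suggestive of secrets; a real deployment would use
# an organization-specific scanner (keys, credentials, proprietary code, etc.).
SECRET_PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)\b(?:password|api[_-]?key|secret)\s*[:=]"),
]


def prompt_is_safe(prompt: str) -> tuple[bool, str]:
    """Return (ok, reason): whether a prompt may be sent to an external chatbot."""
    # Enforce the size cap on the encoded byte length, not character count.
    if len(prompt.encode("utf-8")) > MAX_PROMPT_BYTES:
        return False, "prompt exceeds the 1 KB size limit"
    # Block anything that looks like a credential or private key.
    for pattern in SECRET_PATTERNS:
        if pattern.search(prompt):
            return False, f"prompt matches a sensitive pattern: {pattern.pattern}"
    return True, "ok"


if __name__ == "__main__":
    ok, reason = prompt_is_safe("password = hunter2")
    print(ok, reason)  # False: matches a sensitive pattern
```

A check like this is only a first line of defense; it cannot catch sensitive source code or meeting content that contains no obvious markers, which is why policy and training remain essential alongside technical controls.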
