Samsung recently introduced ChatGPT, the AI chatbot, to its employees in hopes of streamlining processes and boosting its chip business. Within just three weeks, however, three leaks of confidential semiconductor information reportedly occurred, raising concerns about data security and confidentiality breaches.

The leaks happened when Samsung employees entered sensitive information, such as semiconductor equipment measurement data and source code, into ChatGPT. Because ChatGPT can use conversations to train its models, that information became part of the AI's learning data, accessible not only to Samsung but potentially to anyone using ChatGPT.

The first leak happened when an employee in Samsung's semiconductor (Device Solutions) division entered source code from the semiconductor equipment measurement database into ChatGPT to find a quick fix for a bug. The second occurred when another employee entered code related to yield and optimization, and the third when an employee asked ChatGPT to create meeting minutes.

Samsung has taken measures to prevent further leaks, including instructing employees to be cautious about the data they share with ChatGPT and capping each entry at a maximum of 1,024 bytes. The company has also warned employees that once information is fed to the AI chatbot, it is transmitted to external servers where it cannot be retrieved or deleted.
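For context, a byte cap like the one Samsung reportedly imposed is simple to enforce before a prompt ever leaves the corporate network. The sketch below is a hypothetical illustration in Python; the function name, the keyword patterns, and the guard logic are assumptions for illustration, not Samsung's actual implementation.

```python
import re

# Per-entry cap reportedly adopted by Samsung.
MAX_PROMPT_BYTES = 1024

# Naive patterns that often signal source code or internal material
# (illustrative only; a real policy filter would be far more thorough).
SENSITIVE_PATTERNS = [
    re.compile(r"\bdef\s+\w+\s*\("),                           # function definitions
    re.compile(r"\b(SELECT|INSERT|UPDATE|DELETE)\b", re.I),    # SQL statements
    re.compile(r"\b(confidential|internal use only)\b", re.I), # marked documents
]

def check_prompt(prompt: str) -> str:
    """Reject prompts that exceed the byte cap or look like sensitive content."""
    size = len(prompt.encode("utf-8"))  # measure bytes, not characters
    if size > MAX_PROMPT_BYTES:
        raise ValueError(f"Prompt is {size} bytes; limit is {MAX_PROMPT_BYTES}.")
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt appears to contain sensitive content.")
    return prompt

if __name__ == "__main__":
    try:
        check_prompt("Summarize the agenda for next week's team meeting.")
        print("Prompt accepted.")
    except ValueError as err:
        print(f"Blocked: {err}")
```

A check like this only limits how much can leak per request; it cannot undo a submission, which is why the external-server warning above matters.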

This incident highlights the importance of data security and the need for companies to carefully consider the potential risks and benefits of introducing AI chatbots into their workplaces. While AI chatbots can improve efficiency and streamline processes, they also require proper safeguards and training to ensure the confidentiality and security of sensitive information.
