Samsung employees have recently made headlines after allegedly sharing confidential company data with ChatGPT, a chatbot created by US startup OpenAI. This has raised concerns about the security of data shared with AI-powered chatbots and the risks of leaks.
The leak, as reported by a South Korean business news outlet, happened only 20 days after the company lifted its ban on ChatGPT. Ironically, the ban had originally been put in place to prevent exactly this kind of data leak.
Confidential Data Leaked on Three Occasions
The leaked data included the source code of software that measures semiconductor equipment, according to sources. Reportedly, a Samsung worker discovered an error in the code and queried ChatGPT for a solution. In total, employees shared restricted information with the chatbot on three occasions: equipment-related source code on two of them, and an excerpt from an internal corporate meeting on the third.
OpenAI’s Warning Against Sharing Sensitive Information
OpenAI, the creator of ChatGPT, explicitly warns users not to share any sensitive information in conversations with the chatbot. This warning appears in the company’s frequently asked questions (FAQ) section, which notes that information users provide to the chatbot may be used to train the AI behind it.
ChatGPT’s Security Concerns
Concerns about ChatGPT’s privacy and security have been mounting since OpenAI revealed that a flaw in the bot exposed parts of some users’ conversations, as well as their payment details in certain cases. This led the Italian Data Protection Authority to ban ChatGPT, while German lawmakers are considering following in Italy’s footsteps.
Tech Rivals Launching Intelligent Chatbots
The release of ChatGPT has sparked a race in the tech industry to develop intelligent chatbots. Google has launched its ChatGPT rival, Bard, while Chinese tech giant Baidu has unveiled its chatbot, Ernie Bot. Both have received mixed reviews from early adopters.
The alleged leak by Samsung employees underscores the importance of securing data shared with AI-powered chatbots. As chatbot use grows, companies must take adequate measures to safeguard sensitive information from such breaches. It remains to be seen how regulators will respond to the security concerns raised by ChatGPT and its rivals.