OpenAI, the artificial intelligence research laboratory, took its popular ChatGPT bot offline on Tuesday for emergency maintenance after a user exploited a bug to view titles from other users' chat histories. OpenAI announced its initial findings from the incident on Friday, revealing a deeper security issue: the same bug may have exposed personal and payment data belonging to 1.2 percent of ChatGPT Plus subscribers, the enhanced-access tier that costs $20 per month.
Users posted screenshots on Reddit showing that their ChatGPT sidebars listed conversation titles from other users' chat histories; only the titles were visible, not the conversations themselves. OpenAI took the bot offline for about 10 hours to investigate. The company traced the problem to a security flaw in redis-py, the open-source Redis client library, which has since been patched to prevent similar incidents.
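According to OpenAI's postmortem, the flaw involved canceled requests on shared connections: a request canceled after it was sent but before its reply was read could leave the connection out of sync, so the next caller received a response meant for someone else. The sketch below simulates that class of bug with a deliberately simplified stand-in class; it is not redis-py's actual API.

```python
import asyncio


class PipelinedConnection:
    """Stand-in for a shared, pipelined client connection (not redis-py's real API).

    Replies are read back in the order requests were written, so an unread
    reply abandoned by a canceled request is delivered to the next caller.
    """

    def __init__(self) -> None:
        self._replies: asyncio.Queue[str] = asyncio.Queue()

    async def request(self, key: str) -> str:
        # The "server" answers in request order.
        await self._replies.put(f"data-for:{key}")
        # If the caller is canceled during this await, the reply above is
        # never consumed and stays queued on the connection...
        await asyncio.sleep(0.01)
        # ...so the next request on this connection reads it by mistake.
        return await self._replies.get()


async def main() -> None:
    conn = PipelinedConnection()

    # User A's request is canceled mid-flight, leaving its reply unread.
    task_a = asyncio.create_task(conn.request("user-a:chat-history"))
    await asyncio.sleep(0)  # let the request be written to the connection
    task_a.cancel()
    try:
        await task_a
    except asyncio.CancelledError:
        pass

    # User B reuses the same connection and receives user A's data.
    print(await conn.request("user-b:chat-history"))
    # -> data-for:user-a:chat-history


asyncio.run(main())
```

The general fix for this class of bug is to tear down or resynchronize a connection whenever a request on it is canceled, rather than returning it to the pool with a reply still pending.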
The OpenAI team confirmed on Friday that, before the bot was taken offline, some users were able to see another user's first and last name, email address, payment address, the last four digits of their credit card number, and their credit card's expiration date. The company noted that full credit card numbers were never exposed. OpenAI has reached out to alert affected users and has taken additional measures to prevent a recurrence.
The company has added redundant checks to its library calls (a sketch of what such a check might look like appears below), examined logs to ensure messages are only ever served to the correct user, and improved its logging so that similar issues can be identified and confirmed fixed.

Whether OpenAI will face market repercussions similar to those of its competitors remains to be seen. In February, Google's rival chatbot Bard committed a costly public faux pas when a promotional post on Twitter incorrectly claimed that the James Webb Space Telescope took the first image of an exoplanet. Similarly, CNET recently revealed that it had used generative AI to write financial explainer posts before laying off a significant portion of its editorial team.
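To make the first of those mitigations concrete, a redundant ownership check on cache reads, of the sort OpenAI describes adding around its library calls, might look like the following minimal sketch. The key layout, field names, and helper are hypothetical; OpenAI has not published its implementation.

```python
import logging

import redis  # pip install redis (the redis-py client involved in the incident)

log = logging.getLogger(__name__)
r = redis.Redis(host="localhost", port=6379, decode_responses=True)


def get_cached_message(requesting_user_id: str, message_id: str) -> str | None:
    """Fetch a cached message and re-verify that it belongs to the requester.

    The owner_id field stored alongside the payload provides the redundant
    check: even if the cache returns the wrong record, it is discarded
    instead of being served to the wrong user. All names are hypothetical.
    """
    record = r.hgetall(f"message:{message_id}")
    if not record:
        return None  # cache miss; the caller falls back to the primary store
    if record.get("owner_id") != requesting_user_id:
        # Defense in depth: surface the mismatch loudly and serve nothing.
        log.error("ownership mismatch on message:%s; dropping cached record", message_id)
        raise PermissionError("cached record does not belong to the requesting user")
    return record.get("body")
```

The point of such a check is that it is redundant by design: the cache key alone should already guarantee ownership, so the check only fires if something upstream has gone wrong.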