By Julie Mungai, manager of attest services
In March 2023, OpenAI took ChatGPT offline for several hours after a bug exposed portions of users’ chat histories. The breach also exposed direct personal identifiers, such as first and last names, email addresses, and credit card information. Since then, OpenAI has announced a new feature that allows users to turn off their chat history and choose which conversations can be used to train ChatGPT’s models.
While allowing users to turn off chat history is a step in the right direction, the question is whether it’s enough to reduce the impact of a similar breach. Let’s discuss.
It’s important to consider what privacy standards AI tools like ChatGPT should be expected to meet. However, the picture is rarely black and white when systems are designed with privacy as an afterthought. When privacy isn’t built into a system’s foundation from the outset, a tool like ChatGPT is almost certainly going to be stuck playing catch-up, reacting to privacy risks and incidents as they occur, or as they become hot-button issues in the press. In ChatGPT’s case, OpenAI could have implemented these privacy controls from the start.
Even now that the feature has been implemented, the default setting is not privacy-preserving. Users are opted in automatically, and unless they manually change the setting, their chat history is retained indefinitely and used to train the language model.
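To make the contrast concrete, here is a minimal sketch of what a privacy-preserving default could look like in code. Every name in it (the `ChatPrivacySettings` class, the `should_use_for_training` check) is a hypothetical illustration, not OpenAI’s actual implementation; the point is simply that retention and training use stay off until a user explicitly opts in.

```python
# Hypothetical sketch of "privacy by default" for a chat service's
# user settings. Not OpenAI's implementation -- just an illustration
# of opt-in (rather than opt-out) defaults.

from dataclasses import dataclass


@dataclass
class ChatPrivacySettings:
    # Privacy-preserving defaults: the user must explicitly opt IN,
    # instead of having to find and disable a pre-enabled setting.
    save_history: bool = False        # history is not retained by default
    allow_training_use: bool = False  # excluded from model training by default
    retention_days: int = 0           # 0 = delete when the session ends


def should_use_for_training(settings: ChatPrivacySettings) -> bool:
    """A conversation may feed model training only with explicit consent."""
    return settings.save_history and settings.allow_training_use


if __name__ == "__main__":
    # A brand-new user who never opened the settings page:
    default_user = ChatPrivacySettings()
    print(should_use_for_training(default_user))  # False -- private by default

    # Training use requires a deliberate, affirmative choice:
    opted_in = ChatPrivacySettings(
        save_history=True, allow_training_use=True, retention_days=30
    )
    print(should_use_for_training(opted_in))  # True -- explicit opt-in
```

The design choice worth noticing is that inaction costs the user nothing: a user who never touches the settings gets the most protective behavior, which is the opposite of the opt-out default described above.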
Beyond the defaults themselves, ChatGPT’s user experience and transparency raise further questions.
Gradual, reactive privacy patching is never going to be elegant, and even proactive privacy design won’t yield perfection. While issues like the recent ChatGPT breach are best avoided by addressing privacy risks during the design phase, responding to incidents with new privacy features does signal to consumers an intent to do the right thing.
Contact us for more information about how BARR can help you establish privacy best practices at your organization.
About the Author
As a Manager for BARR’s Attest Services practice, Julie Mungai brings extensive experience performing internal controls audits, including business process and technology audits, for domestic and international clients in the manufacturing, technology, and pharmaceutical industries, as well as compliance engagements such as attestation services (SOC 1, SOC 2).
Before joining BARR, Julie gained five years of experience in risk assurance at PwC. She holds a bachelor’s degree from Georgia State University, a master’s degree from New York University, and the CISA certification.