ChatGPT’s New Chat History Feature Leads to Greater Privacy Questions

June 23, 2023 | AI, Privacy

By Julie Mungai, manager of attest services

In March 2023, OpenAI took ChatGPT offline for a few hours due to a bug that exposed users’ chat histories. Along with those histories, the breach exposed direct personal identifiers, such as first and last names, email addresses, and credit card information. Since then, OpenAI has announced a new feature that allows users to turn off their chat history and choose which conversations can be used to train ChatGPT models. 

Allowing users to turn off chat history seems like a step in the right direction, but is it enough to reduce the impact of a similar breach? Let’s discuss.

It’s important to consider what privacy standards AI tools like ChatGPT should be expected to conform to; however, the answer is rarely black and white when systems are designed with privacy as an afterthought. When the foundations of privacy aren’t built into a system organically, a tool like ChatGPT is almost certainly going to be stuck playing catch-up, reacting to privacy risks and incidents as they occur, or as they become hot-button issues in the press. In this case, OpenAI could have implemented privacy controls from the start. 

Even now that the feature has been implemented, the default setting is not privacy-preserving. Users are automatically opted in, and unless they manually change the setting, their history is retained indefinitely and used to train the language model.
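
To make the opt-in versus opt-out distinction concrete, here is a minimal, purely illustrative Python sketch. It is not OpenAI’s actual implementation, and the setting names are hypothetical; it simply contrasts a default that collects data until the user turns it off with a privacy-by-default configuration that collects nothing until the user turns it on:

    from dataclasses import dataclass

    @dataclass
    class ChatPrivacySettings:
        """Hypothetical per-user settings for a chat assistant."""
        save_history: bool            # keep a record of the user's conversations
        use_chats_for_training: bool  # allow conversations to train future models

    # Opt-out model: data retention and training use are on unless the
    # user finds the setting and disables it.
    opt_out_default = ChatPrivacySettings(save_history=True, use_chats_for_training=True)

    # Privacy-by-default (opt-in) model: nothing is retained or used for
    # training until the user explicitly enables it.
    opt_in_default = ChatPrivacySettings(save_history=False, use_chats_for_training=False)

Under the opt-in model, the burden of protecting data shifts from the user to the provider, which is the essence of designing with privacy in mind rather than patching it in later.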

Here are a few more questions to consider regarding ChatGPT’s user experience and transparency: 

  • As an AI program that relies on data to fine-tune its machine learning algorithms, how meaningful and relevant will its responses be if a significant number of users opt out? And for those who select the privacy-preserving option, does the user experience change? 
  • Is it important for the language model to maintain the sequence of prompts and the history of data keyed in by the user in order to work effectively? While the new feature is a win on the privacy front, OpenAI has not been transparent about whether ChatGPT will continue to provide intelligent responses to users who opt to restrict training data.
  • Even with chat history disabled, data is still retained for up to 30 days and reviewed on an as-needed basis. How will those reviews be conducted? How are the ‘as-needed’ instances identified? Transparency and visibility have become baseline expectations for consumers. If self-regulation is going to be promoted and implemented, we need checks and balances to ensure that AI companies are not just making empty promises but actually following through.

Gradual and reactive privacy patching is never going to be elegant, and proactive privacy design is never going to yield perfection. While issues like the recent ChatGPT breach are best avoided by addressing privacy risks during the design phase, responding to incidents and introducing new privacy features does signal to consumers an intent to do the right thing.

Contact us for more information about how BARR can help you establish privacy best practices at your organization.

About the Author

As a Manager in BARR’s Attest Services practice, Julie Mungai brings extensive experience performing internal controls audits, including business process and technology audits, for domestic and international clients in the manufacturing, technology, and pharmaceutical industries, as well as compliance activities such as attestation engagements (SOC 1, SOC 2). 

Before joining BARR, Julie gained five years of experience in risk assurance at PwC. She holds a bachelor’s degree from Georgia State University, a master’s degree from New York University, and the CISA certification.

 
