ChatGPT Privacy Concerns: Is Your Personal Data at Risk?

ChatGPT has become the talk of tech town. Within months of its release, ChatGPT managed to garner more than 100 million users. People can’t stop talking about the conversational way it delivers information, and users describe the experience as similar to interacting with a friend who knows it all.

Though ChatGPT offers solutions to a wide range of shortcomings in the tech world and beyond, it’s important to understand that the benefits come with risks.

ChatGPT is the result of methodically curated consumer data

ChatGPT is an AI-backed language platform, trained on huge data sets so that it can help users with their queries. The more data the model is fed, the better it gets at anticipating one’s questions, generating plausible answers, and identifying patterns.

OpenAI, the company that introduced ChatGPT, fed the model more than 300 billion words, all drawn from the internet in the form of books, research papers, articles, press releases, websites, and discussions.

Potential ChatGPT data risks

While the fact that ChatGPT was trained on huge chunks of data was made known to the public, OpenAI has not yet released any statement about the authenticity of those datasets or their sources. Without that verification, using ChatGPT leaves the platform and its users vulnerable to several data risks, including:

  1. Privacy risks: ChatGPT may retain some personal information shared during interactions with the public. If this data is not properly secured or protected, it could be vulnerable to hacking or data breaches, compromising the privacy of many users.
  2. Bias risks: AI models like ChatGPT are only as good as the data they are trained on. If the training data is biased or altered in any way, it can lead to ChatGPT producing biased or unfair responses.
  3. Misinformation risks: ChatGPT generates responses based on the input it receives, but if that input is incorrect or contains false information, ChatGPT could generate misinformation.
  4. Malicious use risks: ChatGPT could be used for malicious purposes, such as generating fake news, phishing emails, or impersonating individuals to undertake fraudulent and illegal activities.

ChatGPT used data without consent

While ChatGPT has a wide range of knowledge, the sea of data it was fed also included the personal information of millions of people across the globe, none of whom gave consent. It’s important to note that when asked a more personal question, ChatGPT will respond with this phrase: “As an AI language model, I do not have access to any private or confidential information. Additionally, OpenAI, the organization responsible for creating and managing the GPT models, has strict ethical guidelines and practices in place to ensure user privacy and data protection.” While this is promising, the fact remains that the data was obtained without consent, giving many potential users and non-users pause.

Skeptics can agree that the credibility of the data corpus behind the tool is, at the very least, questionable. Information obtained from ChatGPT cannot always be relied upon, as the data available online isn’t always accurate. The platform itself discloses that most of its data set extends only to 2021, with a little more recent material sprinkled in, so the most reliable, up-to-date information may not be reflected in ChatGPT’s answers yet. If employees rely entirely on ChatGPT for information about their area of work without fact-checking it themselves, unpredictable complications and risks can arise, so it’s crucial to pursue a human-plus-digital strategy.

Should contact centers use ChatGPT for customer engagement?

Like most AI-powered tools, ChatGPT was created to help, especially in areas of limited resources, and customer experience is no exception. Customer experience depends on the quality of engagement a contact center agent provides across few or many interactions, and that engagement is only possible when the agent is supported with sufficient, accurate data. The information ChatGPT surfaces depends on the data it was trained on and the decision-based algorithms used to train it. Nonetheless, it is essential to consider that ChatGPT is unacquainted with data that was never present on the internet, and that some of the data it does know may have changed over time and need updating.

It’s safe to say that contact center agents should not rely completely on the answers given by ChatGPT. Agents should continue to research and upskill (in addition to using ChatGPT) in order to maintain high-quality customer engagement.

While contact center owners are keen to make the most of ChatGPT, confidentiality and data privacy concerns must be taken into consideration, as with any technology. There is a real possibility that employees will innocently share third-party business information, classified technical or commercial data, or proprietary information while conversing with the tool.

Additionally, ChatGPT retains the information provided in conversations, and data such as IP address and location is stored. It is natural for unaware users to give away private information without realizing the consequences. Therefore, it is imperative that contact center employers set out policies with specific guardrails around which information can be shared with ChatGPT and how its output should be used to improve customer engagement.
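
To make the idea of a guardrail concrete, here is a minimal sketch, assuming a Python-based agent workflow, of how a contact center could scrub obvious personal identifiers from text before it is ever sent to an external tool like ChatGPT. The redact_pii helper and the patterns below are illustrative assumptions, not HGS or OpenAI tooling; a real deployment would use a vetted PII-detection library and a reviewed policy.

    import re

    # Illustrative patterns only -- these regexes catch emails, US-style
    # phone numbers, and SSNs, but will NOT catch names, addresses, or
    # account numbers; a vetted PII-detection library is needed for those.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "PHONE": re.compile(r"\b\d{3}[\s.-]\d{3}[\s.-]\d{4}\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def redact_pii(text: str) -> str:
        """Replace each match with a labeled placeholder before the
        text leaves the contact center."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    prompt = "Her email is jane.roe@example.com and her phone is 555-867-5309."
    print(redact_pii(prompt))
    # Her email is [EMAIL REDACTED] and her phone is [PHONE REDACTED].

Even a simple pre-filter like this reduces the chance that an unaware agent pastes customer identifiers into a third-party tool, though it complements policy and training rather than replacing them.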
