
Six Ways To Navigate the Customer Service Risks of ChatGPT

Before the ChatGPT hype, another generative AI swept the internet: AI image creators.

Like ChatGPT, AI image generators create outputs from natural language inputs. A user can ask for a painting of a mountain range in the style of Van Gogh and get an impressive result instantly.

Also like ChatGPT, AI image generators such as OpenAI’s DALL-E 2 are trained on billions of images and captions scraped from the web. Recently, this practice came under legal review when a trio of artists launched a lawsuit.

They argue that AI image generators need consent from the original creators of their scraped training data. A similar lawsuit is also underway against an AI programming model. The attorneys representing the plaintiffs describe it as “another step toward making AI fair and ethical for everyone.”

While the validity of such arguments is certainly up for debate, legal and business experts largely agree that there is no clear answer as to who “owns” AI-generated work. 

“I believe this is one of many lawsuits that will shape the case law that will define how copyright laws should operate in the era of AI,” said Replicant CEO and Co-founder Gadi Shamia. “Depending on how ChatGPT will be monetized, the next wave will follow right after.”

And from ChatGPT itself: “The question of copyright eligibility is a complex one. It’s ultimately up to courts and legal experts to determine whether or not ChatGPT’s output can be protected under copyright law. In the meantime, it’s important for users of the technology to be aware of the potential legal implications of using it to generate original content.”

Regardless of generative AI’s legal future, customer service leaders planning to use ChatGPT for their own purposes must proceed with caution. For the time being, these tips provide a framework for doing just that:

Understand the Capabilities

We’ve already covered the capabilities and limitations of ChatGPT in contact centers. But the overview is this: Large Language Models (LLMs) produce impressive conversations but unreliable results. LLMs are trained on a dataset largely irrelevant to enterprise customer service, and the human-in-the-loop oversight they receive isn’t designed for customer service scenarios. In addition, ChatGPT is not capable of connecting with internal systems, understanding organizational workflows, or resolving customer requests. LLMs can provide plenty of support for common back-office tasks, but customer service isn’t one of them.

Don’t Mortgage Your Future

While it’s easy to dream about the future capabilities of ChatGPT, its limitations mean that contact center leaders shouldn’t be centering their digital roadmaps around it anytime soon. “Regardless of how much humans come to depend on iterations of AI generation over the next months and years, it is probably a correct prediction that legal experts across the spectrum will weigh in and courts will see it come across their dockets,” says Priori Legal.

Know Your AI Options

LLMs are trained on a massive corpus of text (essentially the entire internet). This means a lot of what’s inside the “black box” of data is potentially offensive or problematic. While ChatGPT includes some level of human oversight to prevent such content from showing up in responses, there are no guarantees. Deterministic models of conversational AI, on the other hand, use a fixed set of scripts to speak with customers. This means everything that’s said can be reviewed and pre-approved by your legal and business teams, as in the sketch below.
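
To make the contrast concrete, here’s a minimal sketch of the deterministic approach (all intents and responses are hypothetical): every possible reply is written ahead of time, so legal and business teams can review the bot’s entire vocabulary before it ever speaks to a customer.

```python
# Minimal sketch of a deterministic conversational flow.
# Every response below is authored in advance, so review teams
# can approve the bot's complete vocabulary before launch.

APPROVED_SCRIPTS = {
    "billing_question": "I can help with billing. Could you confirm the last four digits of your account number?",
    "cancel_service": "I'm sorry to hear that. Let me connect you with our retention team.",
    "hours_of_operation": "Our support team is available Monday through Friday, 8am to 6pm Eastern.",
}

FALLBACK = "I'm not sure I understood. Let me transfer you to an agent."

def respond(intent: str) -> str:
    """Return a pre-approved script; never generate novel text."""
    return APPROVED_SCRIPTS.get(intent, FALLBACK)

print(respond("billing_question"))
print(respond("unknown_intent"))  # falls back to a safe, reviewed reply
```

Unlike an LLM, this bot cannot say anything your teams haven’t already seen, which is exactly the guarantee generative models can’t make.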

Check For Plagiarism

Though ChatGPT may not be ready for customer service, it can be a great tool for training conversational AI models or helping with back-office work. When doing so, it’s important to confirm that nothing you publish externally is, as FindLaw puts it, “substantially similar to existing copyrighted works.” While ChatGPT may be doing the writing, it still needs a human to edit any content it creates.
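
As a coarse first pass before human review, a script can flag generated text that closely matches a known corpus. The sketch below uses Python’s standard-library difflib; the corpus and similarity threshold are assumptions you’d replace with your own, and no automated ratio substitutes for legal judgment on what counts as “substantially similar.”

```python
from difflib import SequenceMatcher

# Hypothetical corpus of copyrighted passages to check against.
KNOWN_WORKS = [
    "Customer service is the provision of service to customers before, during, and after a purchase.",
]

SIMILARITY_THRESHOLD = 0.8  # assumption: tune this with your legal team

def flag_similar(generated: str) -> list[tuple[float, str]]:
    """Return known passages suspiciously similar to the generated text."""
    hits = []
    for work in KNOWN_WORKS:
        ratio = SequenceMatcher(None, generated.lower(), work.lower()).ratio()
        if ratio >= SIMILARITY_THRESHOLD:
            hits.append((ratio, work))
    return hits

draft = "Customer service is the provision of service to customers before and after a purchase."
for ratio, work in flag_similar(draft):
    print(f"{ratio:.2f} similar to: {work!r}")
```

Anything the script flags should go straight to a human editor rather than out the door.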

Know Your Industry

Having a human in the loop also applies to reviewing content for inaccuracies and defamatory or misleading statements. This can be doubly important in certain industries. A financial or healthcare contact center, for example, may choose to generate pages of training data for a customer service chatbot. But letting a few sentences slip by that make inaccurate financial or health statements could spell catastrophe in a chatbot down the line. When reviewing any LLM-generated content, always lean on your own compliance and subject matter experts.
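
One lightweight way to support that review is to automatically route risky sentences to subject matter experts before they enter a training set. The patterns below are purely illustrative placeholders; your compliance team defines the real rules.

```python
import re

# Placeholder patterns a compliance team might flag for manual review.
# Real rules come from your own regulatory and subject matter experts.
RISKY_PATTERNS = [
    r"\bguaranteed returns?\b",
    r"\bcures?\b",
    r"\bno risk\b",
]

def needs_expert_review(sentence: str) -> bool:
    """Flag sentences with regulated claims before they reach a chatbot."""
    return any(re.search(p, sentence, re.IGNORECASE) for p in RISKY_PATTERNS)

generated = [
    "Our savings product offers guaranteed returns.",  # flagged
    "You can check your balance in the mobile app.",   # passes
]

for sentence in generated:
    status = "SME review" if needs_expert_review(sentence) else "ok"
    print(f"[{status}] {sentence}")
```

A filter like this only narrows the queue; the flagged content still needs a qualified human’s sign-off.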

Protect Data at Every Turn

If you’re hoping ChatGPT can assist your team in summarizing, synthesizing, or iterating off of existing data, ensure that data never includes personal information. Contact Center Automation solutions have robust security and compliance, as well as redacted transcriptions that never reveal PII in any setting. ChatGPT, however, does not. An input of PII can easily turn into an output of PII, which immediately compromises customer data whether the administrator realizes it or not.
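
If ChatGPT must touch internal text at all, redact PII before anything leaves your systems. The sketch below shows the idea with a few regex patterns; these are illustrative only, and production-grade redaction requires far broader coverage (names, addresses, account numbers) and rigorous testing.

```python
import re

# Illustrative patterns only; production redaction needs much broader
# coverage and thorough testing before any text is sent to a third party.
PII_PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "PHONE": r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b",
    "SSN":   r"\b\d{3}-\d{2}-\d{4}\b",
}

def redact(text: str) -> str:
    """Replace detected PII with placeholders before text leaves your systems."""
    for label, pattern in PII_PATTERNS.items():
        text = re.sub(pattern, f"[{label}]", text)
    return text

transcript = "Reach me at jane.doe@example.com or 555-123-4567."
print(redact(transcript))  # Reach me at [EMAIL] or [PHONE].
```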
