
The ChatGPT Glossary for CX Leaders

ChatGPT and Large Language Models (LLMs) have put AI on the radar of every customer service leader. In addition, they have introduced an entirely new set of words and phrases to the CX vernacular.

Here’s a glossary of terms you need to know to understand, evaluate and deploy LLMs in your contact center:

Key LLM Terms

Large Language Models (LLMs): A type of AI model that can understand and generate human-like language. LLMs are trained on vast amounts of text gathered from the internet over decades, which they analyze and process far faster than humans in order to produce natural responses.

Natural Language Processing (NLP): A field of computer science focused on making interactions between computers and human language more natural and intuitive.

Natural Language Understanding (NLU): A subfield of NLP focused on enabling computers to understand human language in a way that is similar to how humans understand language.

Natural Language Generation (NLG): A subfield of NLP focused on producing language output — in other words, what to say back. NLG generates a free-form response to a free-form question.

Generative AI: A type of AI that uses machine learning models to generate new content, such as text, images, music or videos, that is similar to examples it was trained on. LLMs are a subset of Generative AI.

Pre-training: The process of training an LLM on large amounts of text data before fine-tuning it for a specific task.

Training Set: A set of examples used to train an LLM model, typically consisting of input-output pairs that are used to adjust the model’s parameters and optimize its performance on a specific task.

Fine-tuning: The process of adapting an LLM to a specific task by training it on a smaller dataset that is specific to that task.

Transformer: A neural network architecture used in many LLMs, including GPT-3, that enables efficient and effective language processing.

GPT-3, GPT-4: Generative Pre-trained Transformer, a family of widely known and powerful LLMs developed by OpenAI. GPT-3 was followed by the fine-tuned GPT-3.5 and the more capable GPT-4.

ChatGPT: OpenAI’s consumer application for its GPT models, from GPT-3 through GPT-4. The app puts LLMs into consumer hands, allowing dialogue-based interaction to create new text-based content.

Prompt: A user’s text input that initiates and guides language generation by an LLM.
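In practice, the text a customer types is rarely sent to an LLM on its own — it is wrapped in instructions that guide the model’s response. A minimal sketch (the instruction wording and function name here are illustrative, not any vendor’s actual template):

```python
def build_prompt(user_input: str) -> str:
    """Wrap a raw customer utterance in a simple instruction prompt."""
    instructions = (
        "You are a polite customer service assistant. "
        "Answer the customer's question concisely."
    )
    return f"{instructions}\n\nCustomer: {user_input}\nAssistant:"

prompt = build_prompt("Where is my order?")
```

The completed prompt string is what actually initiates language generation when passed to the model.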

Key CX Terms

Entity: A piece of information (e.g., text, item, number) that a machine needs to extract from a sentence to inform decisions and resolve requests. Collecting a set of entities is referred to as slot filling.
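Slot filling can be sketched as tracking which required entities are still missing, so the machine knows what to ask for next. A minimal illustration with hypothetical slot names:

```python
# Entities the machine must collect before it can resolve the request.
REQUIRED_SLOTS = {"order_number", "zip_code"}

def missing_slots(filled: dict) -> set:
    """Return the entities still needed; the machine prompts for each."""
    return REQUIRED_SLOTS - {k for k, v in filled.items() if v}

# One slot captured so far; the machine would ask for the zip code next.
state = {"order_number": "A1234", "zip_code": None}
```
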

Intent: The nature of a customer’s request that must be extracted from their natural utterance (e.g. “I want to send an order back” is categorized as Return Request). The intent informs the machine’s next steps.
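The mapping from utterance to intent can be illustrated with a toy keyword matcher — real contact center systems use trained NLU models rather than keyword lists, so treat this only as a sketch of the input/output contract:

```python
# Toy keyword-based intent classifier (illustrative only).
INTENT_KEYWORDS = {
    "Return Request": ["order back", "return", "refund"],
    "Order Status": ["where is my order", "track"],
}

def classify_intent(utterance: str) -> str:
    """Map a natural utterance to an intent label, or 'Unknown'."""
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return intent
    return "Unknown"
```
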

Multi-intent Recognition: LLMs enable Replicant’s Thinking Machine to recognize multiple requests from a single utterance (e.g., “I need to update my card on file and change my address”), which significantly decreases Average Handle Time.

Zero-Shot Learning: A technique used in LLMs that allows the model to generate outputs for tasks it has not been explicitly trained on.
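Zero-shot prompting works by describing the task and candidate labels directly, with no labeled examples. A minimal sketch of building such a prompt (function name is illustrative):

```python
def zero_shot_prompt(utterance: str, labels: list) -> str:
    """Build a zero-shot classification prompt: only a task
    description and candidate labels, no worked examples."""
    return (
        f"Classify the customer message into one of: {', '.join(labels)}.\n"
        f"Message: {utterance}\n"
        f"Label:"
    )
```

Because the model has seen similar classification tasks during pre-training, it can often produce a reasonable label despite never being trained on these specific categories.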

Human-in-the-Loop: A design approach that incorporates human feedback and oversight into AI systems to improve their effectiveness. Without this oversight, conversations and predictions can go wrong unchecked.

Hallucination: A problem inherent to LLMs in which the model fabricates words or actions when it cannot determine what to do from the information in its knowledge base.

Toxic Reply: An inappropriate response generated by an LLM, often the number one reason why LLMs can’t be connected directly to customers. 

Prompt Engineering: The process of guiding LLMs to generate accurate and relevant outputs by creating high-quality prompts.

Adversarial Examples: Inputs that have been intentionally designed to mislead an LLM or other AI system.

Bias: In the context of LLMs, bias refers to systematic errors or inaccuracies in language generation or understanding that result from the model’s training data.

Explainability: The ability to understand how an LLM arrived at a particular output or decision.

Key Terms for LLM-Powered Automation 

Contact Center Automation: A hybrid approach that uses AI and LLMs to create a customer-centric contact center that efficiently serves customers at scale while elevating agents to focus on the most complex and nuanced issues.

Thinking Machine: Replicant’s Contact Center Automation brain which serves millions of customers across every channel and allows them to speak naturally and fully resolve issues with no wait. 

Application Programming Interface (API): A set of protocols that specify how software components should interact with each other. APIs can connect LLMs with existing platforms like Contact Center Automation to enhance their performance.
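At a concrete level, connecting to an LLM over an API usually means assembling a structured request body. The sketch below follows the common chat-completions shape (the field names mirror OpenAI’s public API but are shown only as an illustration; endpoint, authentication, and error handling are omitted):

```python
import json

def build_chat_request(prompt: str, model: str = "gpt-4") -> str:
    """Assemble a JSON request body in the common chat-completions shape."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # low temperature for more deterministic replies
    }
    return json.dumps(body)
```

A platform like Contact Center Automation would send this body to the LLM provider’s endpoint and parse the generated reply from the response.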

1-Turn Problem Capture: The Thinking Machine’s ability to accurately capture several issues in a single turn of the conversation to increase completion rates and speed to resolution.

Contextual Disambiguation: The Thinking Machine’s ability to recognize nuanced and complex differences in callers’ varied answers and determine their meaning (e.g., “You got it” means “Yes”).

Dynamic Conversation Repair: The Thinking Machine’s ability to seamlessly repair conversations when callers change their mind or need to correct previously provided information. 

Intelligent Reconnect: The Thinking Machine’s ability to automatically call, chat or SMS customers back and pick up where they left off when a conversation drops for any reason.

Few Shot Learning: The Thinking Machine’s ability to decipher intents from thousands of unique phrases using only a few training examples, significantly decreasing how long it takes to deploy Contact Center Automation. 
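Few-shot prompting differs from zero-shot in that a handful of labeled examples are included in the prompt itself. A minimal sketch of assembling such a prompt from (utterance, intent) pairs — the format is illustrative, not Replicant’s actual training method:

```python
def few_shot_prompt(examples: list, utterance: str) -> str:
    """Build a few-shot intent prompt from (utterance, intent) pairs,
    ending with the new utterance for the model to label."""
    lines = [f"Utterance: {u}\nIntent: {i}" for u, i in examples]
    lines.append(f"Utterance: {utterance}\nIntent:")
    return "\n\n".join(lines)
```

With only a few such examples, an LLM can often generalize the labeling pattern to thousands of unseen phrasings.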

Dialogue Policy Control: Replicant’s coded set of rules that ensures accuracy and control over scripts, workflows and actions the LLM follows, and prevents hallucinations and toxic responses. 
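One common way to enforce this kind of control is an allow-list over the actions an LLM may trigger. The sketch below is a generic guardrail pattern, not Replicant’s actual implementation; the action names are hypothetical:

```python
# Only actions on this allow-list may be executed by the system.
ALLOWED_ACTIONS = {"lookup_order", "issue_refund", "escalate_to_agent"}

def enforce_policy(proposed_action: str) -> str:
    """Execute only allow-listed actions; anything else escalates
    to a human agent instead of being run."""
    if proposed_action in ALLOWED_ACTIONS:
        return proposed_action
    return "escalate_to_agent"
```
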

Enterprise-grade: Replicant’s advanced expertise, gained from automating 10M+ conversations over the past six years, which has produced prompt engineering best practices and methods to rigorously evaluate and leverage LLMs.

Prompt Injection Prevention: A prescribed set of workflows and scripts that filter and safeguard against sharing sensitive information with a third-party LLM. 
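Part of such safeguarding is filtering sensitive values out of text before it ever reaches a third-party LLM. A minimal sketch that masks long digit runs (e.g., card or account numbers); production filters cover many more PII types and patterns:

```python
import re

def redact_digits(text: str) -> str:
    """Mask runs of four or more digits before sending text
    to a third-party LLM (a minimal PII filter sketch)."""
    return re.sub(r"\d{4,}", "[REDACTED]", text)
```
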

LLM-Agnostic: The Thinking Machine’s ability to leverage any LLM based on security and performance to ensure contact centers receive the best LLM available. 

Guardrails: A set of rules and preventative measures that allows LLMs to be used to improve the performance of Contact Center Automation without connecting customers directly to third-party platforms. 
