
Replicant Labs: How We Safeguard Contact Centers From LLM Risks

The Replicant Labs series pulls back the curtain on the tech, tools, and people behind the Thinking Machine. From double-clicks into the latest technical breakthroughs like Large Language Models to first-hand stories from our subject matter experts, Replicant Labs provides a deeper look into the work and people that make our customers better every day. 

Benjamin Gleitzman, CTO & Co-founder, Replicant

ChatGPT has placed the power of Large Language Models (LLMs) directly into the hands of users. 

Now, every consumer has access to a vast information source where they can ask any question they like, in whatever manner they choose, and get answers immediately.

As a branch of generative AI, LLMs are a milestone in the decades-long history of neural network advancements. They represent a central part of the field’s growing capabilities and applications.

However, LLMs inherently harbor the countless biases found in the billions of texts (essentially the entire internet and most written works) leveraged to train their simulated grasp of language and thought. 

Answers from LLMs have the veneer of objectivity – the responses are confident and grammatical, even in the face of uncertainty and questionable provenance.

Part storyteller and part idea generator, LLMs produce factual statements mixed with hallucinations, half-truths, and even wholly inaccurate or downright inappropriate responses – all of which make LLMs, on their own, unfit to serve customers directly.

The line between “plausibly true” and “actual reality” is blurring, as shown by this strikingly real AI-fabricated image of Pope Francis.

Replicant is the safest, fastest, and most comprehensive way to bring the transformative power of LLMs like ChatGPT to customers, contact center agents, and businesses. So if LLMs are truly a watershed moment for customer service, how does Replicant guarantee certainty and unlock their potential while protecting enterprise-level contact centers from their limitations?

A Symphony of Substance

Not every LLM is right for customer service, and not every customer question needs an LLM to be resolved.

At Replicant, LLMs are just another tool integrated into the Thinking Machine’s Language Layer, along with a wide variety of powerful in-house and external AI models and systems used to listen, think, and speak.

You can think of our Language Layer as a conductor in an orchestra. The AI models, including LLMs, are like musicians, each with unique strengths and weaknesses. Ultimately, the conductor ensures they all come together to follow the sheet music – your rules, your workflows, and your brand voice.

When done right, the result is a captivating performance: a great conversation for your callers that’s worthy of applause.

The Large Language Model Whisperer 

Large Language Models can exhibit quirky behavior – lil’ changes in punctuation (!) or grammar …and even spacing… all of which can have a gnarly impact on the quality of the answers.

(If an LLM read that last sentence, it might give a very different answer than if the exclamation point, ellipses, and slang were removed.)

At Replicant, we’ve developed methods of guiding, constraining, and evaluating LLMs to achieve the best results.
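
To make that input sensitivity concrete, here is a minimal sketch of the kind of normalization step that can run before an utterance ever reaches a model. It is purely illustrative – the function name and rules are assumptions, not Replicant's actual pipeline:

```python
import re

def normalize_utterance(text: str) -> str:
    """Tidy quirky punctuation and spacing that can skew an LLM's answer.

    Illustrative only: a production pipeline would handle far more
    (casing, slang, spelling, locale) and would be tuned per model.
    """
    text = re.sub(r"\.{3,}|…", ", ", text)   # turn ellipses into a plain pause
    text = re.sub(r"!+", ".", text)          # tone down excited punctuation
    text = re.sub(r"\s+", " ", text)         # collapse runs of whitespace
    return re.sub(r"\s+([,.?])", r"\1", text).strip()

print(normalize_utterance("hey!! can i change my flight…   to Friday??"))
# -> "hey. can i change my flight, to Friday??"
```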

While ChatGPT is a household name, there are a growing number of LLMs out there. Some are good at step-by-step reasoning. Others are quicker to run for simple tasks. Replicant is LLM-agnostic: we test them all and choose the best model for each type of conversation, along with fallbacks in the case of outages.
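
A rough sketch of what model-agnostic routing with fallbacks can look like is shown below. The registry layout, model names, and `route` helper are hypothetical, not Replicant's API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelChoice:
    name: str
    call: Callable[[str], str]  # takes a prompt, returns a completion

def route(task_type: str, registry: dict, prompt: str) -> str:
    """Try the preferred model for a task type, falling back on failure."""
    for model in registry.get(task_type, registry["general"]):
        try:
            return model.call(prompt)
        except Exception:
            continue  # provider outage or timeout: move on to the next model
    raise RuntimeError(f"no model available for task type {task_type!r}")

# Hypothetical registry: a reasoning-heavy model first, a faster one as backup.
registry = {
    "general": [
        ModelChoice("reasoning-model", call=lambda prompt: "step-by-step answer"),
        ModelChoice("fast-model", call=lambda prompt: "quick answer"),
    ],
}
print(route("general", registry, "Where is my order?"))
```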

We are also experts at prompt engineering, which ensures the most accurate output from each LLM. Drawing on data from tens of millions of our calls, webchats, and texts – spanning countless resolved, bread-and-butter customer service scenarios – Replicant maintains comprehensive evaluation datasets to test new prompts, validate upcoming releases, and nimbly incorporate cutting-edge innovations and the latest models.

These datasets, along with hand-crafted design principles from linguists and conversational experts, guarantee we use the best model for a given question at a given turn in the conversation.
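
As a simplified illustration of how such evaluation datasets can be used (the data format and scoring below are assumptions, not Replicant's internal tooling), a new prompt variant might be scored against labeled examples before it ships:

```python
def score_prompt(prompt_template: str, dataset: list, model) -> float:
    """Score a prompt template against a labeled evaluation set.

    Assumes each example looks like {"utterance": ..., "expected_intent": ...}
    and that `model` is any callable returning a predicted intent string.
    """
    if not dataset:
        return 0.0
    correct = 0
    for example in dataset:
        prediction = model(prompt_template.format(utterance=example["utterance"]))
        if prediction.strip().lower() == example["expected_intent"].lower():
            correct += 1
    return correct / len(dataset)

# A prompt change ships only if it matches or beats the current prompt's score.
```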

This pairing of big data and human expertise gives us a significant advantage and saves our customers time and money, sparing them the cost of securing access to several LLMs themselves.

Replicant customers don’t have to learn conversational principles from scratch, and don’t need to worry about regression testing each new LLM release.

Replicant Benefits Callers While Protecting Against LLM Risk

Replicant has built a layer of guardrails that ensure LLMs follow your contact center’s prescribed workflows and scripts, just as you’d expect your agents to do. Out-of-the-box LLMs hallucinate, ignore instructions, and are susceptible to prompt injection attacks that can leak sensitive data to unauthorized callers.

Our proprietary technology, the Replicant Language Layer, builds a firewall between the input and the LLM and what is ultimately spoken to the caller. The result is a Thinking Machine that follows your scripts and workflows by validating every piece of data we receive from LLMs.
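
In spirit, that validation resembles the sketch below: every structured step proposed by an LLM is checked against the workflow before anything is spoken. The action names and schema here are hypothetical; Replicant's actual firewall is proprietary.

```python
# Hypothetical set of actions the workflow permits at this point in the call.
ALLOWED_ACTIONS = {"check_order_status", "schedule_callback", "transfer_to_agent"}

def validate_llm_step(step: dict) -> dict:
    """Reject any LLM output that strays outside the prescribed workflow.

    Illustrative only: a production firewall would also check argument types,
    script adherence, and content safety before text reaches the caller.
    """
    action = step.get("action")
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"LLM proposed an unapproved action: {action!r}")
    utterance = step.get("utterance")
    if not isinstance(utterance, str) or not utterance.strip():
        raise ValueError("LLM response is missing a usable utterance")
    return step
```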

We also ensure the same guardrails are in place around sensitive data such as Personally Identifiable Information (PII). While Replicant is SOC 2 Type II, PCI, and HIPAA compliant, many LLM providers do not yet hold these certifications.

The Replicant Language Layer selects the appropriate set of AI models for each turn of a conversation, giving our customers the same peace of mind they have always enjoyed when dealing with sensitive information.
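
One common pattern for that kind of guardrail – shown here as a generic sketch, not Replicant's implementation – is to mask obvious PII shapes before any text leaves the platform for an external model:

```python
import re

PII_PATTERNS = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact_pii(text: str) -> str:
    """Mask common PII shapes before text is sent to a third-party LLM."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label.upper()}>", text)
    return text

print(redact_pii("Card 4111 1111 1111 1111, email jane@example.com"))
# -> "Card <CARD_NUMBER>, email <EMAIL>"
```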

Replicant’s Advantage: Years of Conversational Experience

LLMs are another powerful statement in Replicant’s continuing promise of better customer experiences through automation.

With years of deep expertise in telephony, analytics, conversation design, continuous improvement, self-service, omnichannel, and security, we pride ourselves on being a trusted advisor to brands resolving customer service issues at scale.

At our core is a technology purpose-built for effective human-to-machine communication for customer service.

We know how to craft great conversations – a complex dance that begins with an honest greeting, just-in-time data lookup, and a recognition of why the caller should give the Thinking Machine a chance when countless other automated systems have failed to meet expectations. 

We maintain initiative in the conversation and gracefully handle requests to hold, to repeat, or to go silent. We understand that in any data collection turn (e.g., asking for a flight’s PNR) the user may say “I don’t have it” or may need help finding it, and that when a caller requests an agent, sharing the expected wait time can convince them to re-engage.
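
A toy version of one such policy for a single data-collection turn might look like the following; the helper name, wait-time source, and phrasing are invented for illustration and are not Replicant's dialog engine:

```python
def handle_pnr_turn(user_reply: str, queue_wait_minutes: int) -> str:
    """Sketch of a dialog policy for one data-collection turn (asking for a PNR)."""
    reply = user_reply.lower()
    if "agent" in reply or "representative" in reply:
        return (f"I can transfer you, but the current wait is about "
                f"{queue_wait_minutes} minutes. Want to try finishing this with me first?")
    if "don't have" in reply or "can't find" in reply:
        return "No problem. Your PNR is the six-character code on your booking confirmation."
    return "Thanks, let me look that up."

print(handle_pnr_turn("I don't have it", queue_wait_minutes=12))
```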

There are, quite literally, hundreds of micro-optimizations built into every Thinking Machine to improve customer experiences. 

Even in this brave new AI world, LLMs must be “told” to handle these scenarios. And it’s through comprehensive dialog policies that we build these features right into our platform and SDK.

Replicant’s customers don’t need to anticipate every edge case from scratch and program them into their automation manually – Replicant’s platform has it all built in.

AI holds the ability to bring a wealth of promise – and potential peril – to your contact center. Come learn why contact center leaders continue to choose Replicant as their trusted partner for Contact Center Automation.
