The history of AI is the history of thinking. It has roots in thinkers like psychologist Jean Piaget (b. 1896), mathematician Seymour Papert (b. 1928), and computer scientist Hal Abelson (b. 1947), who helped establish a few key principles: that no two people think the same, that the best way to learn is by doing, and that computer science is as much an art as it is a science.
But as Replicant CTO and Co-founder Ben Gleitzman pointed out during Resolve 2022, the groundwork laid for AI decades ago didn’t guarantee its progress would be linear. Here’s how decades of AI advancements have led us to today’s inflection point in human-to-machine interaction:
The Deep Learning Revolution
In 2012, the deep learning revolution began. The classical knowledge that laid the foundation for AI gave way to automated image classification – using AI to identify what elements make up a given image. Initially, only about one in three image classifications succeeded. It wasn’t until 2015 that machines truly surpassed humans, identifying things most people couldn’t, like an obscure dog breed or location.
In 2017, the revolution had its sequel with the introduction of the transformer architecture. Models built on it, like GPT-3, read and viewed such a large portion of the internet that they had essentially learned everything about what humans have produced. Early models used one billion parameters to understand the world; today’s use 100x that to understand the context of a question or conversation.
The bitter lesson
Even after the deep learning revolution, AI still faced one big problem: GPT-3 understood nothing. It didn’t possess the foundational grounding to be exact in its responses to humans – to understand what a human was asking at its core, especially in complex situations. This is why contact centers can’t simply throw thousands of call recordings into a large language model and expect an effective solution.
Unlike customer service agents, who are always focused on a goal, AI without proper guardrails will often talk just to talk. The question became: How do we combine the best of humans with the best advancements in AI? Human and machine collaboration was needed to harness the models of 2012 and 2017 into a focused solution and make them useful for business.
The art and science of Thinking Machine™️
To harness the power of AI, Replicant focuses our technology on specific customer service applications. We harvest unstructured data to allow businesses to understand the voice of their customers at scale. This results in more accurate intent data than IVRs, which return incomplete data from customers pushing buttons simply to reach an agent. We focus our Thinking Machine™️ on short, results-oriented conversations that progress interactions at every turn. And we provide a testable, consistent experience that can identify and combat bias quickly. All in, this allows agents to slow down, focus on calls that only humans can answer, and make the most of their time.
So, can machines really converse now?
Decades of AI innovation have led to one answer: a resounding yes. Machines can have creative, impressive, and oftentimes jaw-dropping conversations with humans today. But the question that interests businesses most is: can machines have effective conversations now? Here, the answer requires both the art and the science of AI. There is no magic cauldron that can create automated conversations businesses can confidently deploy to millions of customers.
However, using the principles of effective conversation design, customer-ready AI has become economical, transformative, and repeatable. Replicant’s Thinking Machine™️ can talk to customers over any channel in any language, and even converse with other machines for fully automated two-way tasks. It can build trust, continuously improve, measure success, and deploy quickly in any environment. For CX leaders in search of a solution that addresses their most pressing challenges, this is the answer they’ve been waiting for.