AI in the contact center

Learn to Identify the Five Interaction Categories that are Most Effectively Handled by AI-Powered Virtual Agents

Just about everyone has an IVR horror story to share. Being forced to listen to repetitive menus. Getting stuck in an endless cycle of trying to navigate through options that don’t meet their needs. Wanting to throw the phone against the wall in frustration after hearing an automated voice announce, “I’m sorry, but I didn’t get your response.”

We all know the limitations of IVR systems. Yet too many companies still seem bound and determined to deploy antiquated, poorly designed call automation that not only doesn’t work well but is difficult to implement and expensive to maintain. This makes even less sense when there is a more time- and cost-efficient option available.

Over the past several years, the evolution of conversational AI and delivery over the cloud has enabled businesses to go far beyond the boundaries of traditional IVRs. Companies of all kinds are automating more conversations than ever before while maintaining, and often improving, the customer experience by adopting omnichannel solutions that utilize AI-powered virtual agents.

Even in the earliest stages of development, AI-powered virtual agents were able to handle many tasks that were formerly the sole province of live agents. Conversational AI enables virtual agents to automate the routine and repetitive call types that formerly took up much of a live rep’s day. The same type of simplified experience can be scaled to chat and text channels as a unified application. Virtual agents are powered by a centralized cloud-based AI “brain” that connects to a business’ customer data via APIs. With the broad variety of available tools to replicate the best live agent behavior, virtual agents exceed the capabilities of touchtone IVR, directed dialog, and simple chatbots for customer service.

Producing more effective conversational AI solutions is an ongoing process that requires constant monitoring and refinement. This involves machine learning, building out language models, and tuning the acoustic models and grammars behind every single question so that the phrases the system thinks it heard match what it knows it is listening for, yielding the best possible speech recognition. Most suppliers start by focusing on the voice channel, which offers the greatest potential for rapid ROI. They then move on to scaling the application digitally to accommodate chat or text.

The most effective virtual agents can communicate at the real-time pace of a conversation, understand complex dialogue, and perform in a fashion that emulates a company’s top agents on all channels. While every contact center has a pool of live agents, whether in-house or remote, forward-thinking companies now have a pool of AI-powered virtual agents to handle the routine calls and chats that don’t require complex critical thinking or judgment. By doing so, these organizations are positioned to upskill their live front-line reps to handle only the interactions that genuinely require human intervention.

SmartAction, a recognized innovator in purpose-built AI-powered virtual agents for customer service, has found in its experience designing and deploying virtual agents for voice interactions that self-service applications consistently fall into one of five distinct categories. The company has validated this formula with more than a hundred clients encompassing hundreds of use cases across 12 industries.

What are these categories? SmartAction will detail them in a complimentary live webcast, appropriately entitled "The 5 Categories that Rule Virtual Agents," on CrmXchange on Thursday, October 1st at 1:00 pm. They will show businesses how to broadly classify their interactions to ensure that a live human agent never handles the ones best addressed by AI-powered virtual agents. Among the areas to be covered are:

  • Determining which specific categories are best suited for exclusive AI applicability
  • Comprehending the top use cases driving conversational AI adoption
  • Real-world examples from 6 leading companies

The presenters are industry veterans with proven expertise in helping organizations deliver frictionless customer experiences via conversational intelligence. Brian Morin, CMO, SmartAction, has been instrumental in helping the company achieve its status as the top-rated Virtual Customer Assistant solution on Gartner Peer Insights and its distinction as "The Leader in AI-Enhanced Self-Service" by Frost & Sullivan. Mark Landry, SmartAction's VP of Product, began his career as a Lucasfilm intern and became an award-winning screenwriter for Nickelodeon, The Disney Channel, and Amazon Studios. He is a CX designer who has designed human-to-AI interactions for more than 100 brands, including DSW, AAA, Electrolux, and Choice Hotels. They will be joined by Marilyn Cassedy, Director of Customer Success, SmartAction, who oversees relationships with high-level clients to ensure they are receiving full value.

Register now for this eye-opening webcast. If you are unable to attend the live presentation, a link to the recording will be posted within 24 hours after the presentation.

AI-Driven Modeling to Improve the Agent and Customer Experience

Are traditional analytics and contact center practices enough to drive customer satisfaction? During this live Virtual Conference webcast, Larry Skowronek and Michelle Carlson from NICE Nexidia lead a conversation about how AI-driven data modeling can be the key to achieving greater success. To further explain, Larry and Michelle walk through the state of analytics today, an overview of sentiment analytics, an overview of predictive behavioral routing, and how to combine sentiment and predictive behavioral routing to maximize customer satisfaction and drive progress.

Today, we generally see a large disconnect between how businesses and their customers evaluate customer interactions. Eighty percent of companies claim they deliver "superior" customer service, while in reality only eight percent of their customers actually agree. This is partially because the state of measuring customer satisfaction is deeply flawed. Manual reviews of calls, which require a human to evaluate each transaction, produce highly subjective, interpretive, and inconsistent feedback; this not only drives up costs but also fails to move the needle.

The customer contact center is a dynamic, evolving animal. The only way to respond to change is with change. Enter: Sentiment Analytics. Sentiment Analytics uses machine learning to train a model that measures whether a customer interaction was positive, negative, or neutral on a granular scale. The machine can take otherwise subjective behaviors and turn them into objective data that is highly valuable and actionable. This data is consistent, accurate, and without bias. Most importantly, because it is a machine, it can do as much work as we throw at it, so we can receive and analyze data for every single customer interaction.

This AI-based model has proven to be statistically accurate, according to several CX centers that use it. But how exactly does this model measure customer satisfaction? The model reliably measures every interaction, including:

  • Spoken words, like “Awesome”, “I’m annoyed”, and “This is ridiculous”.
  • Laughter detection.
  • Pitch and tone.
  • Cross talk: customer and agent interrupting one another.

These models may also differentiate calls that start positively and end negatively, indicating worst practices, as well as calls that start negatively and end positively, indicating best practices. The reliability and accuracy of these models have allowed businesses to gain deep insights on the overall customer experience and quickly translate those insights into action. Finally, these models create a hyper-personalized customer experience. This is a monumental advantage, as eighty-four percent of customers say that personalized customer experiences are key to winning their business.
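The call-trajectory idea above can be sketched in a few lines: compare average sentiment in the first and second halves of a call and flag the two patterns described. This is a minimal illustration only; the scoring function and thresholds are assumptions, not NICE Nexidia's actual model.

```python
def call_trajectory(segment_scores, threshold=0.2):
    """Label a call by comparing average sentiment in its first and second
    halves. Scores range from -1.0 (negative) to +1.0 (positive)."""
    mid = len(segment_scores) // 2
    start = sum(segment_scores[:mid]) / mid
    end = sum(segment_scores[mid:]) / (len(segment_scores) - mid)
    if start > threshold and end < -threshold:
        return "started positive, ended negative"   # a worst-practice flag
    if start < -threshold and end > threshold:
        return "started negative, ended positive"   # a best-practice flag
    return "stable"

# A call that opens with frustration and ends resolved:
print(call_trajectory([-0.8, -0.5, 0.1, 0.6, 0.9]))
```

In practice the per-segment scores would come from the acoustic and lexical signals listed above (spoken words, laughter, pitch, cross talk) rather than being supplied by hand.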

For a perfectly personalized customer experience, sentiment models can aid in Predictive Behavioral Routing (PBR), which uses sentiment analytics to match the customer to the appropriate agent and thereby improve the overall customer experience. By bringing Sentiment Analytics and PBR together, businesses can seamlessly operationalize their sentiment data by:

  • Calculating customer sentiment on 100% of interactions
  • Combining that sentiment with personality data to make the best connection for the customer
  • Immediately improving the customer experience with AI-powered routing
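The matching step can be caricatured as picking the available agent with the best historical results for a given caller profile. The data structures and scores below are hypothetical placeholders, not the actual PBR engine:

```python
# Hypothetical sketch of predictive behavioral routing: choose the agent whose
# historical satisfaction scores are best for this customer's sentiment profile.
def route_call(customer_profile, agents):
    """agents: list of dicts, each with an 'affinity' map from customer
    profile -> historical satisfaction score. Returns the best match's name."""
    available = [a for a in agents if a["available"]]
    best = max(available, key=lambda a: a["affinity"].get(customer_profile, 0.0))
    return best["name"]

agents = [
    {"name": "Ana", "available": True,
     "affinity": {"frustrated": 0.9, "neutral": 0.6}},
    {"name": "Ben", "available": True,
     "affinity": {"frustrated": 0.4, "neutral": 0.8}},
]
print(route_call("frustrated", agents))  # Ana handles frustrated callers best
```

A production system would compute the affinity scores continuously from the sentiment data gathered on 100% of interactions, rather than storing them as static numbers.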

So, what does this process look like in real time? In one example, a Fortune 500 company's customers were initially being transferred all over the contact center. The company then optimized its call routing based on sentiment data. Here's what happened:

  • They saw a 15% decrease in negative sentiment on PBR-routed calls.
  • They saw a 13% increase in positive sentiment on PBR-routed calls.
  • They saw a 6.4% decrease in average handle time on PBR-routed calls.
  • This required zero hours of coaching, training, or employee change management.

The combination of sentiment analytics and behavioral routing improves customer satisfaction metrics, reduces the cost of manual listening and surveys, enables targeted coaching and performance management, and increases employee satisfaction. Your analytics practices are valuable, but they should evolve to keep up with dynamic consumer expectations. Your employees and customers alike will thank you for it.

To listen to the full webcast click here: https://bit.ly/2ULJgPB

Can we build machines that understand us?

Tobias Goebel, Mar 2020

The question of whether we can build machines that truly think is a fascinating one. It has both practical and philosophical implications, and the two perspectives answer a key question very differently: how close to the real thing (human thinking) do we need to get? In fact, does rebuilding the exact human ways even matter? And are we too easily impressed by anyone claiming to have accomplished this Frankensteinian feat?

From a purely practical perspective, any machine that improves a human task on some level (speed, quality, effort) is a good machine. When it comes to "cognitive" tasks, such as reasoning, or predicting what comes next based on previous data points, we appreciate the help of computer systems that produce the right outcome faster, better, or more easily than we can. We do not really care how they do it. It is perfectly acceptable if they "simulate" how we think, as long as they produce a result. They do not actually have to think like we do.

The question of whether machines can truly think has become more relevant again in recent years, thanks to the rise of voice assistants on our phones and in our homes, as well as chatbots on company websites and elsewhere. Now, we want machines to understand — arguably a different, more comprehensive form of thinking. More specifically, we want machines to understand human language. Again we can consider this question from two different angles: the practical, and the philosophical one.

John Searle, an American professor of philosophy and language, introduced a widely discussed thought experiment in 1980, called The Chinese Room. It made the argument that no program can be written that, merely by virtue of being run on a computer, creates something that truly is thinking, or understanding. Computer programs are merely manipulating symbols, which means operating on a syntactical level. Understanding, however, is a semantical process.

Searle concedes that computers are powerful tools that can help us study certain aspects of human thought processes. He calls that "weak AI". In his 1980 paper, he contrasts it with "strong AI": "But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states."

Cognitive states are states of mind such as hoping, wanting, believing, hating. Think (sic!) about it: proponents of strong AI, and they do exist, claim that as soon as you run an appropriately written computer program (and only while it is running), these computers literally are hoping, are wanting, etc. That surely must be a stretch?

Searle summarizes his thought experiment as follows:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.
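The room's purely syntactic operation can be caricatured in a few lines of code: input symbols are matched against a rule book and paired output symbols are copied out, with no representation of meaning anywhere. The tiny "rule book" below is an illustrative stand-in, not a serious attempt at Chinese question answering:

```python
# The "rule book": a purely syntactic mapping from input symbol strings to
# output symbol strings. Nothing in the table encodes what any symbol means.
RULE_BOOK = {
    "你好吗？": "我很好。",          # "How are you?" -> "I am fine."
    "天空是什么颜色？": "蓝色。",    # "What color is the sky?" -> "Blue."
}

def chinese_room(symbols: str) -> str:
    # The operator matches shapes against the book and passes out the paired
    # shapes; "understanding" never enters the process at any point.
    return RULE_BOOK.get(symbols, "？")

print(chinese_room("你好吗？"))
```

However large the rule book grows, the program is still only manipulating symbols, which is precisely Searle's point about syntax versus semantics.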


This is a simple but powerful thought experiment. For decades now, other philosophers have attempted to poke holes in the argument, e.g. claiming that while the operator him- or herself might not understand Chinese, the room as a whole actually does. Yet all of these replies are eventually refutable, at least according to Searle, so the argument is discussed and studied to this day.

Strong AI is of course not necessary for practical systems. As an excellent example of that, consider the social chatbot Mitsuku. (A “social bot” has no purpose other than to chat with you, as opposed to what you could call functional or transactional chatbots, such as customer service bots.) Mitsuku is a five-time winner (and now a Guinness World Record holder) of the Loebner Prize, an annual competition for social bots. She is entirely built on fairly simple “IF-THEN” rules. No machine learning, no neural networks, no fancy mathematics or programming whatsoever. Just a myriad of pre-written answers and some basic contextual memory. Her creator is Steven Worswick, who has been adding answers to Mitsuku since 2005. The chatbot, who you can chat with yourself, easily beats Alexa, Siri, Google, Cortana, and any other computer system that claims it can have conversations with us. (Granted: none of the commercially available systems do claim that social banter is their main feature.)

Certainly, Mitsuku by no means aims to be an example of strong AI. It produces something that on the surface looks like a human-to-human conversation, but a computer running the IF-THEN rules is of course nowhere near a thinking machine. This example, however, shows that building something that serves a meaningful purpose requires neither a machine that "truly thinks" nor a corporation with the purchasing power of an Amazon, Apple, or Google: a single individual with a nighttime-and-weekend passion can accomplish just that. And Mitsuku, with its impressive ability to chitchat for long stretches of time, is meaningful to many, according to its creator.
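The flavor of "IF-THEN rules plus basic contextual memory" can be sketched as below. Note that Mitsuku itself is authored in AIML rather than Python, and its rule base is vastly larger; this toy fragment only illustrates the mechanism, with patterns and responses invented for the example:

```python
import re

# A toy rule-based chatbot: each rule is a pattern -> response template,
# and a small context dict provides the "basic contextual memory".
RULES = [
    (re.compile(r"my name is (\w+)", re.I), "Nice to meet you, {0}!"),
    (re.compile(r"what is my name", re.I),  "You told me your name is {name}."),
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello! How are you today?"),
]

def reply(text, memory):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            if match.groups():
                memory["name"] = match.group(1)   # remember what was captured
                return template.format(*match.groups())
            if "{name}" in template:
                return template.format(**memory)  # recall from memory
            return template
    return "Tell me more."

memory = {}
print(reply("My name is Sam", memory))    # Nice to meet you, Sam!
print(reply("What is my name?", memory))  # You told me your name is Sam.
```

The first rule that matches wins, which is why rule ordering, not any model of meaning, determines the conversation's shape.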


It is easy to get distracted by technological advancements and accomplishments, and the continuous hype cycles we find ourselves in will never cease to inspire us. But let’s make an attempt to not let them distract us from what fundamentally matters: that the tools we build actually work, and perform a given task. For chatbots, that means that they first and foremost need to be able to have a meaningful conversation in a given context. Whether they are built on simple rules or the latest generation of neural network algorithms shouldn’t matter. Despite that concession, it will probably remain forever human to marvel at advances towards solving what might be the biggest philosophical question of all: can we ever build a machine that can truly understand?

What New Paths Will Companies Take to Shape the Customer Journey in the Years to Come?

As the time-honored adage puts it, ‘a journey of 1000 miles begins with a single step.’ These days, the journey a customer takes when engaging with a company may be far more geographically limited but usually starts with a lot more steps. The ever-evolving customer journey incorporates varying interactions and experiences that take place on different touchpoints: a website visit for research, a call with a sales rep or chat with an agent, a conversation on social media or online review site, an inbound call, and even an in-store retail encounter.

It has become more important than ever for a business to take advantage of every possible resource to understand its customers: their wants, needs, and expectations; their thoughts, opinions, and feedback. Building this knowledge will enable companies to deliver the highly personalized customer experiences that are becoming ever more crucial in an increasingly competitive marketplace, where consumers are offered a constantly growing array of options.

Given access to vast resources of data and technology, the customer journey today has morphed dramatically from where it was even five or ten years ago. And every company’s success depends upon combining the right technologies with the agility needed to effectively manage all the interactions that take place on every channel along the way.

Gazing into the future, which often-predicted developments will come to pass? Will the migration to the cloud finally encompass all businesses and make service more responsive? Will messaging ultimately surpass voice as the communication channel that is most compelling for businesses and consumers alike? Will digital transformation extend its reach deeper into the contact center environment to better leverage profile data, more closely examine customer feedback, and measure sentiment? Will customers expect greater availability of agent support that involves the use of screenshots, photos and video? And how will the growing use of AI-powered solutions progress, both in terms of those that provide more effective self-service options and those that support the development of more highly specialized agents?

Of course, no one can foresee every possible path the customer journey will take in the coming years, but CX and contact center executives and managers have an opportunity to get a cogent vision of many of the most important changes in an upcoming complimentary roundtable webcast on CrmXchange. On Thursday, December 5, at 1:00 PM ET, NICE Nexidia and RingCentral will team up to explore "Smooth Customer Journey: Predictions for 2020 and Beyond."

Ken Brisco, Senior Product Marketing Manager, NICE Nexidia, who is responsible for establishing the scope, message, and competitive advantages of NICE's Customer Journey Optimization Solutions within the CX space, will be joined by RingCentral's John Finch, AVP PMM, Customer Engagement, an executive with an extensive background in developing strategy for global customer engagement. Among the topics they will cover are:

  • How AI-driven analytics can boost customer loyalty and retention
  • The importance of measuring quality across all channels
  • In what ways bots are best able to collaborate with humans
  • How macro to micro-level journey analysis drives deeper insights into customer engagement

Register now for this insightful look into which near-future developments may change the way your organization helps to orchestrate the customer experience. If you are unable to attend on December 5, you can access the recorded version approximately 24 hours after the live presentation.


5 Ways to Use AI in the Contact Center

Artificial intelligence (AI) isn’t about replacing live, human agents with robots; it’s about supporting the agent by handling more routine issues so customers can get live help for their complex issues.

Here are five ways contact centers can use AI.

1. Replace simple IVR processes. Basic IVR can do something like transfer a call to the sales department. AI takes this several steps further thanks to machine learning and natural language processing, allowing it to understand what the customer is saying (as opposed to just giving the customer a string of choices). AI can get rid of annoying queues and “If A, then B” action sequences and replace them with smarter, more human interactions.
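The difference between a menu-driven IVR and understanding what the caller says can be shown with a stand-in intent matcher. Real conversational AI uses trained natural language understanding models; the keyword sets below are invented for illustration only:

```python
# A stand-in for NLU-based intent recognition: instead of "Press 2 for sales,"
# the caller simply says what they want, and the system infers the intent.
INTENTS = {
    "billing": {"bill", "invoice", "charge", "payment"},
    "sales":   {"buy", "purchase", "upgrade", "pricing"},
    "support": {"broken", "error", "help", "fix"},
}

def route_utterance(utterance: str) -> str:
    words = set(utterance.lower().split())
    # Pick the intent sharing the most vocabulary with the caller's words.
    scores = {intent: len(words & vocab) for intent, vocab in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "agent"  # fall back to a live agent

print(route_utterance("I want to upgrade my plan"))  # routes to sales
```

A genuine NLU model generalizes far beyond fixed keywords, but the shape is the same: free-form speech in, an actionable intent out, with escalation to a human when confidence is low.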

2. Act as an agent assistant. AI can function as an assistant by sitting on the desktop, collecting customer info via a bot that’s currently serving the customer, then alerting the agent about what their next steps should be. This goes back to the overall purpose of AI to help the agent perform better and to work smarter instead of harder.

3. Be part of the quality assurance team. AI solutions can analyze agent and customer conversations and give live feedback to team leaders and QA teams about both what is being said and how it's being said. AI listens for and interprets more than just words; it can also ascertain stress level and clarity of speech.

4. Help stabilize workforce management. AI can not only predict upcoming spikes in communication volume based on data, but also recruit agents to fill gaps in the schedule. And since AI can handle a number of basic customer needs on its own, it reduces the number of employees needed at any one time and levels out major peaks and valleys.

5. Improve the customer experience. AI can analyze the customer journey to determine where the hottest touchpoints are, as well as different areas for improvement. It can also recognize customer patterns and predict experiences in order to deliver an excellent experience before the customer even realizes what they need next.

AI is finding its way into all sorts of brands, organizations, and business processes. One of the places where it's making the most impact is the contact center. Managers are using AI to create better experiences for everyone, from agents and supervisors to the customers themselves.

How AI Impacts the Future of Contact Center Agents

For the past few years, artificial intelligence (AI) has been a major topic in business. Right now, one of the best uses for AI algorithms is to process data and then make recommendations or predictions related to the customer. There are more advanced ways to employ AI too, though, many of which interact directly with the customer. By adding newer types of AI to the contact center, another question arises: what does this mean for live agent support?

Invisible vs. Visible AI

Much of AI runs in the background, making it invisible to customers. For example, AI can analyze history and trends to make customized recommendations. While customers may not know how, they certainly know that their experience is improving. Visible AI, on the other hand, interacts directly with the customer. An example of this is Siri for iOS, a digital assistant that you can interact with relatively naturally and without looking at your device.

The invisible types of AI have been around for a long time. They’re mature, and they effectively improve and streamline processes. Visible AI, though, isn’t nearly as mature. There’s more room for error, and since customers are dealing with it directly, problems need to be swiftly addressed by live agents.

AI Accuracy and Live Support

There’s often a question of how much AI will replace humans in the workplace. It’s unlikely that AI will ever be 100% accurate, which is why a live support team is still integral to the contact center. Any AI that’s customer-facing requires a support team behind it that can quickly jump in if a mistake is made. Live agents are the only ones able to handle tasks that require empathy, in-depth decision-making skills, and high-level understanding.

Furthermore, some experts think that it’s best to keep AI and human-ness separate. They feel that no matter how much tech advances, AI is never going to be truly human – and maybe it shouldn’t try to be.

Smarter Hiring with AI in Mind

As modern contact centers integrate AI into their routine processes, there should be more of a focus on communication, creativity and empathy skills when hiring agents. Once you know which tasks you’ll automate using AI, you can hire people who match the skills needed for the tasks you can’t or won’t automate.

How do you use artificial intelligence in your contact center?