It’s Not an Exact Science
Measuring agent performance has never been an exact science. I should know. I spent years managing agents, trying to provide the constructive coaching they needed to meet their goals and deliver the best possible service to our customers. Unfortunately, I rarely had the information needed to do so, and that often proved a nearly insurmountable challenge.
Here’s how this process always worked for me. I suspect many of you reading this will be nodding your heads, having experienced a very similar scenario.
Pattern or Exception?
It’s the first of the month, and I need to perform my assessment of my agent, Mary Stewart. I pull five of Mary’s calls from my call recording software to review. The problem is that these are random calls that could be about any number of issues Mary typically handles.
We’re a cable service provider, and on the second call I listen to, Mary tries to help a customer who has no picture. Mary schedules a service technician to go to the customer’s home without attempting any troubleshooting steps, such as first sending a signal to the box. This goes against company policy and will affect her bonus if she has too many unnecessary “truck rolls.”
However, on this call the customer was impatient and claimed to have had this problem before, so I don’t know whether Mary would have tried to troubleshoot had the caller reacted differently, or whether this indicates she really needs more coaching or retraining on this topic.
A Fruitless Process
What I need are more examples of calls where a customer reports a similar problem, so I can see how Mary handles the issue and get a better sense of her skill set in this area. The only way to find such examples is to set off on what’s essentially a wild goose chase: pulling additional calls from the recorder, listening to a few seconds of each one, and trying to determine whether it fits the bill. But time eventually catches up with me. I have 25 more agents to review, and I can’t afford to continue this fruitless process, so I give up.
When it comes time to review with Mary, we discuss the call, she tells me it was an anomaly, I give her a few pointers, and I hope for the best. Next month rolls around, and I want to see if Mary is doing better with these types of calls. Once again, I’m mostly out of luck. With only random calls to choose from, it’s hit or miss whether the ones I pull relate to a truck roll. So, as before, I do assessments on the calls I have available, without knowing if she’s making progress on this key metric.
And at the end of the quarter, when the numbers come in, I see that Mary does in fact have a higher-than-average number of truck rolls. She did need more targeted coaching. But not only did I lack good examples of her calls, I also had no examples of other agents handling these calls well to use as best practices. Finding those calls would have required an even greater hunting expedition.
Wasting Time on a Broken System
In the end, no one benefits from this broken system. The company needlessly spent money on truck rolls that didn’t need to happen. I, as the supervisor, spent a lot of time searching for the right calls to review and coach against, without stellar results. The agent missed her bonus because she didn’t receive the training and coaching she needed. And some customers probably waited needlessly for service visits when a knowledgeable agent could have solved the problem over the phone.
Getting the Right Information
What supervisors need is a way to get the right calls for every agent, without the fruitless hunting and pecking. If I knew which agents to talk to, and about what, the time I spent coaching could finally be effective and valuable. I need more than the random-sampling, hope-to-get-lucky method. A method that easily scored agents against 100% of their calls, not just a random sample, and categorized those calls by the company’s metrics and goals, would ensure that I always knew which areas my agents struggled with and that I had calls available to coach against or use as best-practice examples. What’s needed is the introduction of Interaction Analytics into the quality management process.
[Photo: "Greylag Geese Flying" by Nathan Goddard via Flickr]