Friday, September 18, 2009

HeartBeat™ by the numbers

I was fortunate enough to have an idea for a webinar accepted by the Contact Center Performance Forum (CCPF) a couple of months ago, and the event came off without a hitch on August 6th. The topic was “Great Customer Experiences Start with Consistently High Performing Technology,” and the expected audience was call center managers – the people managers, not the IT wonks. I’m used to talking more about the performance of the machines in the contact center than about the impact on the agents, but the hook for this webinar was that you’re not going to have even half a chance at a positive experience if the technology required to handle & deliver calls to agents falls down on the job. When callers finally do get through, they’ve already been preconditioned with a lousy experience, and who will they take it out on? The CSRs, of course! The moderator worked me over a bit and kept saying, “Mike, you’ve got to make this real for people. Give us some real numbers. And don’t forget, agents are people too!”

So I did some digging and I was more than a little surprised.

One of the services we offer is surveillance for self-service solutions in production – the ones you often deal with before 0-ing out to get to an agent. This is our HeartBeat™ service. HeartBeat™ generates test calls one at a time, around the clock, accessing systems via the PSTN to ensure they are available & working as intended; if they’re not, an automated notification is generated.
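For the IT wonks in the room, here’s roughly what that kind of surveillance loop looks like in spirit. This is just a sketch in Python, not our actual implementation; the phone number, the pacing, and helper names like place_test_call and classify are made up for illustration, with a random stub standing in for the real telephony layer.

```python
import random
import time
from typing import Optional

# Illustrative sketch only: the real HeartBeat(TM) service drives live test
# calls over the PSTN and feeds a configurable notification chain. A random
# stub stands in for the telephony layer here so the loop can run anywhere.

def place_test_call(number: str) -> dict:
    """Stub for dialing a self-service app and walking a scripted test case."""
    return {
        "answered": random.random() > 0.02,      # busy / ring-no-answer / silence
        "host_ok": random.random() > 0.02,       # back-end data actually available
        "disconnected": random.random() < 0.01,  # dropped before the caller was done
    }

def classify(result: dict) -> Optional[str]:
    """Map one test-call result onto the failure buckets described later in the post."""
    if not result["answered"]:
        return "issue with answer: busy, ring-no-answer, silence or click"
    if not result["host_ok"]:
        return "caller-requested information unavailable: host issue"
    if result["disconnected"]:
        return "caller disconnected prematurely"
    return None  # the call completed as intended

def run_heartbeat(number: str, contact: str, cycles: int, interval_seconds: int = 300) -> None:
    """Place test calls one at a time and raise an alert whenever one fails."""
    for _ in range(cycles):
        problem = classify(place_test_call(number))
        if problem:
            # Stand-in for the automated notification HeartBeat generates.
            print(f"ALERT to {contact}: {problem}")
        time.sleep(interval_seconds)

if __name__ == "__main__":
    run_heartbeat("+1-800-555-0100", "on-call engineer", cycles=3, interval_seconds=1)
```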

During a typical month we generate anywhere from half a million to 600,000 HeartBeat interactions. Would you believe that month after month, anywhere from 4% to upwards of 6% of those interactions encounter some kind of availability or performance issue? I know the HeartBeat value prop really well, but even I was surprised… 5% issues on average? Really??

Yes, really. Who knew?

Shan did. Shan manages our HeartBeat team. He lives this every day, along with Mike2 & Evan. Together they’re the team that defines the test cases, configures the servers, figures out what’s ok and what’s not, and decides who the system should call when there’s a ring-no-answer vs. a host down. It’s a whole lot more than just making a phone call and checking for an answer.

So I asked Shan to tell me how things break down (literally). Here’s what he told me:

Correcting for repeat issues (ones that last for a while and are therefore detected over & over again), here’s the distribution:

40% - Issue with answer – busy, ring-no-answer, silence or click
40% - Caller-requested information unavailable – host issue
20% - Caller disconnected prematurely

So out of 600,000 test calls in a typical month, roughly 30,000 run into some kind of trouble. Of those, 12,000 are answered incorrectly, or not at all. Another 12,000 are customers being led on a wild goose chase all the way to the point of finally being able to retrieve the info they need, only to find out it wasn’t actually available. And another 6,000 callers are just getting cut off – they get to start from scratch.
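If you want to check my math, here’s the back-of-the-envelope version in Python, using the 600,000-call month and Shan’s 40/40/20 split (the 5% issue rate is just the middle of the 4% to 6% range we actually see):

```python
# Back-of-the-envelope math for a 600,000-call month at a 5% issue rate.
monthly_test_calls = 600_000
issue_rate = 0.05  # middle of the observed 4% to 6% range

distribution = {
    "answered incorrectly or not at all": 0.40,
    "requested information unavailable (host issue)": 0.40,
    "caller disconnected prematurely": 0.20,
}

total_issues = int(monthly_test_calls * issue_rate)  # 30,000 problem interactions
for bucket, share in distribution.items():
    print(f"{bucket}: {int(total_issues * share):,}")  # 12,000 / 12,000 / 6,000
```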

But back to the webinar for a minute…

So here you are, a call center manager working hard to keep your people and your customers and your business units all happy. You’re up-selling and cross-selling while cutting costs, keeping attendance high & training agents to deliver the best possible customer experience. And all the while you don’t know whether the technology you’re counting on to take care of your customers & offload your agents is doing its job or not. A batting average of .350 gets you noticed in the majors, and 94% is probably an A when grading on a curve. But how does that rate in the contact center?! Can you really afford 5% of your customer interactions going sideways?


Mike Burke


http://www.iq-services.com/
6601 Lyndale Ave South, #330
Minneapolis, MN 55423
