Friday, April 22, 2011

CEM: Current Buzzword, Old Methods

Do you ever get that feeling of being run over by a new trend or acronym that seems to have come out of nowhere? Welcome to CEM (Customer Experience Management). In the last few months, this buzzword has not only gained ever-increasing levels of hype but is being applied to everything, everyone and every product and service sold today. So here we are in another hype curve. This feels a bit like when CRM came along in the late ’90s. Anyone remember the “360-degree view of the customer” or “customer intimacy”? We heard all those promises and visions of turning the contact center into a profit center while simultaneously increasing ROI, reducing churn and delivering customer nirvana.

Do you remember life before and after CRM? Did you get the promised results? If not, what will change with the new buzzword of the decade? CEM holds similar promise as a methodology for increasing customer satisfaction. Promises made by vendors today tout the virtues of tools, metrics, analysis, etc. Don’t get me wrong. I agree with the premise. I believe there are lots of good tool sets and methods out there today. I also believe there are tools and technologies inside your company right now that are not being leveraged to the fullest extent.

We are seeing a new set of organizational changes aimed at addressing the problem. Titles like Chief Experience Officer and Director of Customer Experience are showing up in companies. Yes, the movement has momentum. Companies are realizing there is something to be done to improve customer experience – something they can control. I read in a recent article that “if all your products are the same as your competition’s, the differentiator has to be service.” How can you disagree with that? The global economy has commoditized all kinds of goods. Is one eBay purchase significantly different from the next?

So should you implement a CEM solution – and if so, how? Earlier in my career, I worked as a practice leader in the CRM/contact center field, looking at business processes and technologies to improve operations. Things haven’t changed too much. Traditional consultative methods still play nicely in today’s “new” market. Initiatives start with a requirements definition/gathering phase. We do a current state analysis to better understand our people, processes and technologies. We look at where we need to go (future state) and what we need to do to get there (gap analysis).

Of course, it has always been harder than it sounds. But it is a proven approach to some pretty big problems.

Tony O'Brien
IQ Services
612.243.6700 | www.iq-services.com

Wednesday, November 3, 2010

IVR, Agents & Business Rules = Customer Satisfaction

During internal meetings last week, we got side-tracked on the subject of “the customer satisfaction silver bullet.” As we all know, there is no such silver bullet. A lot of folks talk like they have the only answer. But at the end of the day, it takes a great deal of coordination, information and determination to manage and improve the technology, people and processes that influence customer satisfaction.

Some services and methods help clients optimize the performance of technologies that support customer interactions like IVR and CTI. Some services capture the voice or opinion of real customers after using IVR and other contact center solutions (e.g., CallerBeat™ Real-Time Customer Experience Interviews). But somewhere in all the data provided via remote monitoring, internal measurements, analytics, customer surveys and more, companies find meaningful indicators about how to best manage technology, people and process in a way that optimizes customer satisfaction. Does that mean every customer is 100% satisfied after every interaction? Absolutely not. But just because we can’t achieve the goal, doesn’t mean we should stop trying.

And no matter how we come at the issue, there’s always another meaningful perspective and source of actionable data.

Do you have any questions or comments about this blog? We would love to hear from you! Please send us an email!


Marla Geary

Wednesday, July 28, 2010

Hosted IVR: Letting Go Is Hard to Do

It seems so logical. IVR and other communications technologies evolve all the time. Each new advancement offers cost savings, efficiency and/or end-user benefits that can’t be ignored. And hosted IVR providers allow businesses to take advantage of these advancements without huge upfront investments.

In today's economy, if something isn’t part of your company’s core business, it is clearly a candidate for outsourcing or hosting – IVR is no exception. With hosted IVR offerings popping up on every corner, it can be hard to justify keeping IVR functionality in-house.

So why is it so hard for some of us to make the transition? There are security concerns of course. But these concerns can be effectively overcome in a variety of ways. Based on discussions with many of our clients, the pain really seems to be letting go of the control and first-hand insight. You and your team know how critical the IVR is to your business. If it doesn’t perform, there will be consequences for both you and your customers. If you move to a hosted IVR solution, you’ll have to rely on a middleman of sorts to interpret results and keep you informed. If problems arise, you’ll have to trust someone else’s team to solve the problem with the same vigor and concern that your team would exhibit. Despite assurances and SLAs, this leap of faith can be too risky for some teams and businesses.

But of course there is a way to move to a hosted IVR solution with much less stress and worry. Outside-In Monitoring is a straightforward, efficient way to stay on top of the performance of your IVR solutions and hosted-service providers at the same time. By monitoring the end-to-end solution with transactions that do the things real customers do, our clients know their hosted, in-house and hybrid integrated IVR solutions are working as expected and delivering the desired customer experience.

One contact recently told us they couldn’t imagine moving to a hosted IVR provider without the objective, real-time insight we can provide through Outside-In Monitoring. If you and your team are concerned about making the change or you want more insight into the performance of an existing IVR application, please give us a call. We can give you peace of mind that you’ll know exactly how your IVR is working.

Marla Geary

http://www.iq-services.com/
6601 Lyndale Ave South, #330
Minneapolis, MN 55423

Thursday, May 6, 2010

You’re Taking It Out of Context

Having lived with a writer for the last 36 years, I’ve come to count “Eats, Shoots & Leaves: The Zero Tolerance Approach to Punctuation” among my favorite books.

The cover of the book shows two pandas, the vegetarian whiting out the comma, and the NRA member walking to the right brandishing a handgun.

The point is, of course, that punctuation matters. Why? Because punctuation establishes a context for words and thereby turns them into a thought and gives them meaning. Of course it’s not just about punctuation – content has to be coherent for there to be identifiable meaning – but context certainly matters.

How many times do we hear “I know I said that, but you took it out of context and that’s not fair?” And what that really means is that when we take something out of context, it can be misleading.

If you run a contact center, you’re no doubt swimming in numbers from all of your analytics, metrics and KPIs. It’s all important data of course, but without context, data are merely numbers that’ll drive you to distress. You may be using Tivoli or HP Operations Manager (formerly Open View) to help you organize your data and give it structure. Structure is certainly important, but I assert structure does not establish meaningful context.

To truly have context, you also need perspective. “So what am I getting at?” I hear you muttering.

You need to establish context for the metrics you get from your contact center technology instrumentation, and to do that you have to add perspective to the mix…your customers’ perspective.

Here’s what I mean…

All that data you get from your high-level system monitoring solution can in fact be nothing more than noise when there’s no context. Many times I’ve heard, “The first thing we do is shut off the audible alarms, because with a network like ours there’s always something broken. If I jumped every time I got an alert that a server or segment had an issue, I’d spend all my time chasing ghosts. Most of the time it’s just not that bad.”

What you really need to know is when the customers’ experience is being adversely impacted by a technology issue, so you can use the raw data from your internal monitoring systems to identify the bad actor and get back to business. You can’t pursue every hiccup as though it were the end of the world – unless it is. And if it is, you need to know NOW. Whether the issue is trivial from the inside-out perspective or a major failure, the context of what’s happening to customers is what you need to know so you can appropriately direct your attention.

Mike Burke

Wednesday, April 7, 2010

Monitoring! Monitoring! Monitoring!

Whenever we start talking about our remote availability and performance monitoring services (HeartBeat™), like I did just a few weeks ago during a monitoring webinar, there is invariably a little confusion at the outset about the differences between remote availability and performance monitoring, call recording/agent quality monitoring, and voice quality monitoring. So I thought it might be helpful and interesting to define these different monitoring methods in a blog post. And I promise to try to do it without talking like an engineer (too much). So here goes…

Call Recording/Agent Monitoring/Quality Monitoring refers to applications that record and capture data from conversations between customers and call center agents including tracking the agent desktop transcript during customer interactions. The application allows supervisors to playback an agent-customer conversation and at the same time see exactly what the agent was doing during the call. Some of these applications involve speech analytics to assist with call trends analysis. This information allows contact center supervisors to assess agent performance and business rules in terms of efficiency and appropriateness, as well as to coach performance and offer self-paced training to agents who need it.

Voice Quality Monitoring in the IP telephony world is typically an automatically generated mathematical assessment or grade that asserts how good a call sounded and how it would be rated by people if they were listening to the call at the point the measurement is taken. There are different ways to calculate voice quality; here are simple definitions of a few common methods:

  • PESQ (Perceptual Evaluation of Speech Quality) asserts what listeners would think based on the technology involved in the process and the measured performance of the network carrying the traffic.
  • QoS (Quality of Service) is a way to look at and rate multiple characteristics of a call so you can try to improve each characteristic until the quality is satisfactory.
  • MOS (Mean Opinion Score) is literally a subjective assessment by a bunch of people in a room voting on how good a call sounds.
  • R-Factor is a number or score that tries to quantify that subjective MOS assessment made by a bunch of people in a room.

Suffice it to say that a sniffer (or router) that watches voice samples and packets going by can determine how much packet loss, jitter, and delay is happening at the network segment where it’s inspecting the packets. It can also score the call, and report the score on an instantaneous or call by call basis. That’s voice quality.
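To make the R-Factor/MOS relationship concrete, the ITU-T G.107 E-model defines a standard formula for converting an R score into an estimated MOS. Here is a minimal sketch of just that conversion (not a full E-model implementation, which would also derive R itself from packet loss, jitter, and delay):

```python
def r_to_mos(r: float) -> float:
    """Map an E-model R-factor (0-100) to an estimated MOS (1.0-4.5),
    per the standard ITU-T G.107 conversion formula."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

# A "toll quality" call with R around 80 maps to roughly MOS 4.0;
# enough packet loss or jitter to drag R down to 50 lands near MOS 2.6.
for r in (80, 50):
    print(f"R={r} -> MOS {r_to_mos(r):.2f}")
```

This is why a sniffer that measures loss, jitter, and delay on one segment can emit a calculated MOS without anyone ever listening to the call – and also why that score only describes the segment being measured.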

The servers that make up an IP telephony implementation, whether contact center or unified communications, are constantly monitoring and reporting on voice quality and notifying staff in some fashion if voice quality gets out of an acceptable range. It’s really important to note that in an IP world, voice quality can be really great in one spot, and really lousy somewhere else. And depending on how the path is stitched together, the voice quality numbers reported can be 100% accurate but misleading, i.e., the audio could be completely unintelligible for some reason, but the network could be carrying that garbled audio perfectly, resulting in a perfect score. On the other hand, the network may distort the audio somewhat due to packet loss or jitter, resulting in a less than perfect calculated score, yet the message could easily be intelligible by a real person. Fun, huh?

Remote Availability & Performance Monitoring is an external monitoring method that periodically calls or interacts with self-service customer facing solutions to ensure they are available and performing as expected. It is external because the transaction is generated outside the system being monitored just like a real end-user transaction.

Let’s step back a minute. Think about how a contact center is put together. Now overlay the sequence of interactions a caller has with the self-service or communications technology and all its supporting functionality (e.g., switching, routing and hunting, speech recognition and text-to-speech technologies, data access and retrieval methods, CTI screen pop, etc.). Now associate each step of a typical telephone call with a unique part of the contact center’s self-service infrastructure, including the CTI and routing processes required to transfer a call to an agent.

Because each test call follows a carefully defined script from the time the equipment goes off-hook and dials all the way through the end of the call, a remote availability and performance monitoring transaction acts just like a customer doing a specifically defined activity (such as checking an account balance, reporting a power outage, etc.). It verifies at each step that the system is saying exactly what is expected and responding to the end-user’s inputs within established response time thresholds. By accessing and interacting with the self-service system, it is literally monitoring the availability and performance of that system. If the test call process determines the system is not saying what it is supposed to at any step, or is taking too long to respond, notifications alert someone specifically designated to assess the severity of the issue and deal with it.
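The scripted step-by-step verification described above can be sketched in a few lines. This is a simplified illustration, not our actual monitoring platform: the `ivr` callable here is a hypothetical stand-in for a real call leg (a real deployment drives actual telephony equipment), and the script steps, prompt text, and thresholds are invented for the example.

```python
import time

# Each scripted step: the input to send, the prompt text the IVR is
# expected to speak back, and the response-time threshold in seconds.
SCRIPT = [
    {"send": "1",    "expect": "account balance", "max_secs": 3.0},
    {"send": "1234", "expect": "your balance is", "max_secs": 5.0},
]

def run_test_call(ivr, script):
    """Walk a scripted transaction against `ivr` (a callable that takes
    an input and returns the prompt heard). Returns per-step results;
    in a real system any failure would trigger a notification."""
    results = []
    for step in script:
        start = time.monotonic()
        prompt = ivr(step["send"])
        elapsed = time.monotonic() - start
        ok = step["expect"] in prompt.lower() and elapsed <= step["max_secs"]
        results.append({"step": step["send"], "ok": ok,
                        "heard": prompt, "secs": round(elapsed, 2)})
        if not ok:
            break  # stop and alert; no point continuing a broken call
    return results

# Canned stand-in for a live IVR, answering instantly.
def fake_ivr(digits):
    return {"1": "Account balance menu. Please enter your account number.",
            "1234": "Your balance is $42.17."}.get(digits, "Sorry, I didn't get that.")

for r in run_test_call(fake_ivr, SCRIPT):
    print(r["step"], "PASS" if r["ok"] else "FAIL", f'{r["secs"]}s')
```

The key design point matches the post: the check lives entirely outside the system under test, so a failed step localizes the problem to whichever piece of infrastructure owns that step of the call flow.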

So there you have it – Agent Monitoring, Voice Quality Monitoring, and Remote Availability and Performance Monitoring – side-by-side.

Mike Burke

Monday, March 29, 2010

Poll Results and Filling the Gap between Internal Metrics and Customer Experience

Last week I had the opportunity to present a webinar called “Internal Monitoring Isn’t Enough.” It was an opportunity for me to educate attendees about a critical gap many companies haven’t bridged between internal performance stats and the true picture of how their contact center and communications solutions are performing – in other words, the customer experience. The simple method (or in our case, the cost-effective service) we discussed for filling this gap is Remote Availability & Performance Monitoring (RAPM). As you all know, polls in webinars are good, so we included several in our webinar to get a sense of the make-up and experiences of our audience. Not surprisingly, more than 60% of the attendees identified themselves as IT or service providers.

Much more interesting were the responses to our poll questions about what people are doing today to find out if their systems are working.

* Over 60% experienced outages lasting 2 or more hours
* 30% reported outages going undetected for more than a day
* Only 1 in 3 attendees relied on some kind of monitoring at all to learn about customer-impacting technology issues
* Another 13% learn about performance only when they review weekly performance reports – or have no idea at all

Most shocking of all to us (despite the fact that we sell services to address this problem) was the following poll result:

* More than 55% of the respondents indicated they count on call center agents or customer complaints to tell them about customer-impacting technology issues!!!

Ultimately, it appeared that 68% of respondents were not proactively trying to determine how communications technologies impact customers.

Can you imagine trying to drive a car that’s low on oil for 2 hours before the Check Engine light comes on? Or driving with a flat tire for more than a day?

You wouldn’t do these things intentionally unless you thought it was too hard or you didn’t have an alternative to fix the issue. Similarly, no one would intentionally irritate customers (and ultimately the agents who talk with customers) hoping their complaints would provide enough information to help resolve a technology issue.

But of course that’s why we held the webinar in the first place…to let folks know there is a simple and effective way to sample and report on customer experience that also lets you know exactly what’s going on so you can ACT when it matters.

Mike Burke

Friday, March 12, 2010

You Are Here

Last week, we were talking with a few people from a well-known research institute. We were just introducing ourselves when one of the women started talking about quality and service assurance issues for the contact center. She talked about the value of bottom-up and top-down metrics for the contact center. But she also talked about a gap that remains when it comes to associating this plethora of great information with what is really happening to customers at a given moment. How do you effectively and simply put all that info into context? There are so many cures out there, like speech analytics and surveys. And yet the cures sometimes come with more obstacles to overcome: high costs, information overload, disassociation from actual transactions, lack of real-time data and the possibility that the cure could itself impact the customer experience (e.g., a survey).

That’s where we got to jump in and say “Thank you for making our sales pitch for us.” She was effectively describing the gap our services fill.

Do you remember the first time you went to a mega mall or foreign airport? Do you remember how relieved you were when you found the map with the little yellow “You are here” arrow? That’s what remote availability and performance monitoring does for people navigating the megamall of contact center performance technologies and metrics. It offers the simple, straightforward view of integrated technology performance without impacting an actual customer. By monitoring the steps of a real end-user interaction, it lets you know when something goes wrong in the interaction and where to look for the problem. It provides that little yellow arrow saying “start here.” In the whirlwind of data (e.g., AHT, BHCC, abandoned calls, etc. etc.), remote availability and performance monitoring provides context so your data isn’t just data, it is actionable information.

Want to learn more? Join us for a live webinar on Tuesday, March 16 at 1 PM Central or download the recording whenever the time is right for you: REGISTER HERE.

Marla Geary