Wednesday, April 7, 2010

Monitoring! Monitoring! Monitoring!

Whenever we start talking about our remote availability and performance monitoring services (HeartBeat™), as I did just a few weeks ago during a monitoring webinar, there is invariably a little confusion at the outset about the differences between remote availability and performance monitoring, call recording/agent quality monitoring, and voice quality monitoring. So I thought it might be helpful and interesting to define these different monitoring methods in a blog post. And I promise to try to do it without talking like an engineer (too much). So here goes…

Call Recording/Agent Monitoring/Quality Monitoring refers to applications that record conversations between customers and call center agents and capture related data, including a record of the agent’s desktop activity during the interaction. The application allows supervisors to play back an agent-customer conversation and, at the same time, see exactly what the agent was doing during the call. Some of these applications include speech analytics to assist with call-trend analysis. This information allows contact center supervisors to assess agent performance and business rules in terms of efficiency and appropriateness, as well as to coach performance and offer self-paced training to agents who need it.

Voice Quality Monitoring in the IP telephony world is typically an automatically generated mathematical grade that asserts how good a call sounded, i.e., how people would rate it if they were listening at the point where the measurement is taken. There are different ways to calculate voice quality; I’ll try to provide simple definitions of a few of the common methods:

  • PESQ (Perceptual Evaluation of Speech Quality) predicts what listeners would think by comparing a known reference audio signal with the degraded version that actually traveled across the network being measured.
  • QoS (Quality of Service) is a way to look at and rate multiple network characteristics of a call, such as packet loss, jitter, and delay, so you can ultimately try to improve each characteristic until the quality is satisfactory.
  • MOS (Mean Opinion Score) is literally a subjective assessment: a bunch of people in a room vote on how good a call sounds, on a scale from 1 (bad) to 5 (excellent), and the scores are averaged.
  • R Factor is a 0-to-100 score, calculated from measurable network impairments, that tries to predict the subjective MOS that bunch of people in a room would have given; a standard formula converts between the two, as sketched below.
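
For the curious, that conversion is simple enough to show. Here is a minimal Python sketch; the function name is mine, but the formula is the published one from ITU-T G.107:

    def r_to_mos(r):
        # The published ITU-T G.107 mapping from an R Factor to an estimated
        # MOS, i.e., the average score that room full of people would give.
        if r < 0:
            return 1.0      # unusably bad
        if r > 100:
            return 4.5      # the MOS scale tops out here
        return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7.0e-6

    # A clean narrowband call maxes out around R = 93.2:
    print(round(r_to_mos(93.2), 2))   # prints 4.41

So an R Factor in the low 90s is about as good as an ordinary phone call gets, and by the time it drops much below 70, many of the people in that room would be grumbling.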

Suffice it to say that a sniffer (or router) that watches the voice packets going by can determine how much packet loss, jitter, and delay are occurring at the network segment where it’s inspecting the packets. It can also score the call and report that score on an instantaneous or call-by-call basis. That’s voice quality.
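
To make that concrete, here is a deliberately simplified Python sketch of the kind of arithmetic such a device performs. It follows the shape of the ITU-T G.107 E-model but drops most of its terms: the G.711 loss constants are published values, the delay term is a common linear approximation, and counting jitter-buffer discards as extra packet loss is a simplification of mine, so treat this as an illustration rather than a compliant implementation:

    def score_segment(loss_pct, late_pct, one_way_delay_ms):
        # Deliberately simplified E-model score for one network segment.
        # Treating jitter-buffer discards ("late" packets) as extra loss is
        # a simplification, not part of the standard.
        ppl = loss_pct + late_pct   # effective packet loss, in percent

        # Equipment impairment: Ie + (95 - Ie) * Ppl / (Ppl + Bpl), with the
        # published ITU-T G.113 values for G.711 (Ie = 0, Bpl = 4.3).
        ie_eff = 0.0 + (95.0 - 0.0) * ppl / (ppl + 4.3)

        # Delay impairment, using a widely used linear approximation to the
        # E-model's delay term: cheap below ~177 ms, then it climbs fast.
        idd = 0.024 * one_way_delay_ms
        if one_way_delay_ms > 177.3:
            idd += 0.11 * (one_way_delay_ms - 177.3)

        # Start at the default narrowband ceiling and subtract impairments.
        return 93.2 - idd - ie_eff

    # 1% loss, 0.5% late packets, 150 ms one-way delay:
    print(round(score_segment(1.0, 0.5, 150.0), 1))   # prints 65.0

Feed it the loss, discard, and delay figures observed at one network segment and you get that segment’s score; measure at a different segment and, as we’re about to see, you may get a very different number.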

The servers that make up an IP telephony implementation, whether contact center or unified communications, constantly monitor and report on voice quality and notify staff in some fashion if it falls outside an acceptable range. It’s really important to note that in an IP world, voice quality can be really great in one spot and really lousy somewhere else. And depending on how the path is stitched together, the voice quality numbers reported can be 100% accurate yet misleading. For example, the audio could be completely unintelligible for some reason, but the network could be carrying that garbled audio perfectly, resulting in a perfect score. On the other hand, the network may distort the audio somewhat due to packet loss or jitter, producing a less-than-perfect calculated score, yet the message could easily be intelligible to a real person. Fun, huh?

Remote Availability & Performance Monitoring is an external monitoring method that periodically calls or interacts with customer-facing self-service solutions to ensure they are available and performing as expected. It is external because the transaction is generated outside the system being monitored, just like a real end-user transaction.

Let’s step back a minute. Think about how a contact center is put together. Now overlay the sequence of interactions a caller has with the self-service or communications technology and all its supporting functionality (e.g., switching, routing and hunting, speech recognition and text-to-speech technologies, data access and retrieval methods, CTI screen pop, etc.). Now associate each step of a typical telephone call with a unique part of the contact center’s self-service infrastructure, including the CTI and routing processes required to transfer a call to an agent.

Because each test call follows a carefully defined script from the moment the equipment goes off-hook and dials all the way through the end of the call, a remote availability and performance monitoring transaction acts just like a customer performing a specific activity (such as checking an account balance or reporting a power outage). It verifies at each step that the system says exactly what is expected and responds to the end-user’s inputs within established response-time thresholds. Because it is accessing and interacting with the self-service system the same way a customer would, it is literally monitoring the availability and performance of that system. If the test call finds that the system is not saying what it is supposed to at any step, or is taking too long to respond, notifications alert someone specifically designated to assess the severity of the issue and deal with it.
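
To picture what one of those scripted test calls looks like in software, here is a hypothetical Python sketch. The call-control object, its methods, and the alert function are stand-ins for whatever dialing, speech-recognition, and notification plumbing a monitoring platform actually uses, and the script steps and thresholds are invented for illustration:

    import time

    # A scripted "check my balance" test call: each step lists the digits to
    # send (if any), the prompt we expect to hear next, and the longest
    # acceptable wait before we hear it. All values are invented examples.
    SCRIPT = [
        (None,      "thank you for calling",     6.0),
        ("1",       "please enter your account", 4.0),
        ("1234567", "your account balance is",   8.0),
    ]

    def alert(message):
        # Stand-in for paging whoever is designated to assess the issue.
        print("ALERT:", message)

    def run_test_call(call):
        # `call` is a hypothetical call-control object assumed to offer
        # send_digits(digits) and wait_for_prompt() -> recognized text.
        for digits, expected, threshold in SCRIPT:
            if digits:
                call.send_digits(digits)
            started = time.monotonic()
            heard = call.wait_for_prompt()   # speech-recognized system audio
            elapsed = time.monotonic() - started
            if expected not in heard.lower():
                alert("wrong prompt: expected %r, heard %r" % (expected, heard))
                return False
            if elapsed > threshold:
                alert("slow response at %r: %.1fs (threshold %.1fs)"
                      % (expected, elapsed, threshold))
                return False
        return True   # the system said what it should, when it should

The essential idea lives in the two checks: one for what the system said, and one for how long it took to say it.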

So there you have it – Agent Monitoring, Voice Quality Monitoring, and Remote Availability and Performance Monitoring – side by side.

Mike Burke

www.iq-services.com
6601 Lyndale Ave South, #330
Minneapolis, MN 55423