
Sophisticated Skype: What Makes a Good Call?

Ultimately, a communication experience is good if a user says it is, making Rate My Call data a good starting point for analysis.

Kevin Kieller

June 4, 2018

5 Min Read

Everyone who has deployed Skype for Business hopes their users have good calls. But what's good?

Many rely on audio quality to determine whether a call was good, most often based on the mean opinion score (MOS). MOS is usually measured on a five-point scale, from 1 (Bad) to 5 (Excellent). Originally, MOS was calculated from subjective ratings gathered directly from users. Because conducting subjective listening tests for a production communication system is impractical, MOS is more typically estimated algorithmically.

Microsoft's Skype for Business tracks MOS for several measures, broadly categorized as either "Listening Quality" (MOS-LQ) or "Conversation Quality" (MOS-CQ). Most VoIP vendors report MOS-LQ, which measures only the quality of the audio stream and doesn't take factors such as delay or echo into account. MOS-CQ additionally accounts for conversational impairments such as echo and delay, reflecting the quality of the two-way exchange rather than a single listening direction.

Specifically, Skype for Business reports on:

  • Listening MOS (LQ) -- the predicted wideband listening quality MOS for the audio played to the user

  • Sending MOS (LQ) -- the predicted wideband MOS for the audio sent from the user

  • Network MOS (LQ) -- the predicted wideband MOS for the audio played to the user, based only on network factors such as packet loss, jitter, packet errors, packet reordering, and the codec in use. Because the Network MOS (NMOS) is constrained by the codec used, achieving a 5.0 NMOS is impossible

  • Conversational MOS (CQ) -- the predicted narrowband conversational quality of the audio stream played to the user. Conversational MOS takes all the factors of Listening MOS into account and additionally considers echo, network delay, jitter buffering, and device-imposed delays

For many of the MOS scores, a best practice is to look at trends (is the value moving up or down?) rather than focusing on achieving a specific target value.
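To make trend-watching concrete, here's a minimal sketch that computes a weekly average of Listening MOS from an exported call-detail file. The file name and column names (call_date, listening_mos) are assumptions for illustration, not an actual export schema.

```python
# Minimal sketch: weekly Listening MOS trend from an exported call-quality CSV.
# The file name and column names (call_date, listening_mos) are assumed for illustration.
import pandas as pd

calls = pd.read_csv("call_quality_export.csv", parse_dates=["call_date"])

# Average Listening MOS per week -- the direction matters more than any single value
weekly_mos = (
    calls.set_index("call_date")["listening_mos"]
         .resample("W")
         .mean()
)

print(weekly_mos.tail(8))  # last eight weeks
print("Change vs. prior four weeks:",
      round(weekly_mos.tail(4).mean() - weekly_mos.tail(8).head(4).mean(), 3))
```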

In contrast to these algorithmically estimated MOS scores, the Skype for Business Rate My Call feature allows the capture of direct user feedback (see my related No Jitter post).

However, as our research has shown, the challenge is that predicted MOS-LQ scores don't correlate strongly with user quality ratings for calls. For instance, below is an analysis of MOS versus Rate My Call feedback across 10 different organizations representing a population of more than 100,000 users. Clearly, an increase in MOS doesn't correlate with a reliable increase in a user rating for the call.

(Note, we're currently researching correlations between the Conversation MOS and user ratings.)
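If you want to reproduce this kind of comparison against your own data, the sketch below is one way to do it; it assumes you've already joined predicted MOS values and Rate My Call star ratings per call into a single table, and the column names (mos_lq, user_stars) are hypothetical.

```python
# Minimal sketch: how strongly does predicted Listening MOS track Rate My Call stars?
# Assumes a pre-joined table with hypothetical columns mos_lq (1-5) and user_stars (1-5).
import pandas as pd

df = pd.read_csv("mos_vs_ratemycall.csv")

# Simple linear correlation across all rated calls
print("Pearson correlation:", round(df["mos_lq"].corr(df["user_stars"]), 3))

# Average user rating per MOS band, to see whether higher MOS buys higher stars
bands = pd.cut(df["mos_lq"], bins=[1.0, 2.0, 3.0, 3.5, 4.0, 4.5, 5.0])
print(df.groupby(bands, observed=True)["user_stars"].agg(["mean", "count"]))
```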



Beyond MOS
Beyond MOS scores, Skype for Business flags calls suspected to be poor based on five factors. A call qualifies as poor if:

  • PacketLossRate > 10% or

  • DegradationAvg > 1.0 (average MOS degradation) or

  • RoundTrip > 500 ms or

  • JitterInterArrival > 30 ms or

  • RatioConcealedSamplesAvg > 7%

To emphasize, exceeding any one of these thresholds sets the poor-quality call flag. Microsoft uses this same poor call definition in the online Call Quality Dashboard (CQD) and the on-premises Call Quality Methodology (CQM) Scorecard tools.
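For those who want to apply the same rule to their own exported quality-of-experience data, here's a minimal sketch of the "flag if any threshold is exceeded" logic; the parameter names mirror the metrics above, but how you extract them from your QoE database or CQD export depends on your own tooling.

```python
# Minimal sketch of the poor-call rule: a call is flagged if ANY threshold is exceeded.
# Parameter names mirror the metrics above; pulling them from QoE/CQD data is left to your tooling.
def is_poor_call(packet_loss_rate, degradation_avg, round_trip_ms,
                 jitter_inter_arrival_ms, ratio_concealed_samples_avg):
    return (
        packet_loss_rate > 0.10                 # > 10% packet loss
        or degradation_avg > 1.0                # > 1.0 average MOS degradation
        or round_trip_ms > 500                  # > 500 ms round-trip time
        or jitter_inter_arrival_ms > 30         # > 30 ms inter-arrival jitter
        or ratio_concealed_samples_avg > 0.07   # > 7% concealed samples
    )

# Example: high jitter alone is enough to flag the call
print(is_poor_call(0.02, 0.4, 120, 45, 0.01))  # True
```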

In our research, calls flagged as poor loosely correlate with user ratings in some circumstances. For instance, 20% of external calls routed over a VPN connection (no split-tunneling) were flagged as poor, compared with 5% of external calls with split-tunneling in use. Correspondingly, users rated calls routed over the VPN 0.3 stars (one third of a star) lower on average.
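To run the same comparison on your own calls, a sketch like the one below groups external calls by whether split-tunneling was in use and compares the poor-call rate and the average star rating; the column names (split_tunnel, is_poor, user_stars) are placeholders for whatever your export provides.

```python
# Minimal sketch: poor-call rate and average rating for VPN vs. split-tunneled external calls.
# Column names (split_tunnel, is_poor, user_stars) are placeholders for your own export.
import pandas as pd

external = pd.read_csv("external_calls.csv")

summary = external.groupby("split_tunnel").agg(
    poor_rate=("is_poor", "mean"),      # share of calls flagged poor
    avg_stars=("user_stars", "mean"),   # average Rate My Call rating
    calls=("is_poor", "size"),
)
print(summary)
```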

So What Makes a Good Call?
Based on our research, no single factor ensures users find calls acceptable. Conversely, many factors, some technical and others behavioral, can cause users to give calls poor ratings. These include:

  • Network bandwidth and latency

  • Not using split-tunneling for external calls

  • Processing power of end-user devices -- e.g., users with more powerful laptops, faster CPUs, and more CPU cores generally have higher-rated communication experiences

  • End-user devices (headsets) -- using an approved headset yielded user ratings only minimally above ratings for built-in laptop microphones or audio devices plugged into the audio jack; we've been surprised at how small the difference is

  • Conference join times -- how long a user waits before being admitted to a conference call; the Monitoring Server Conference Join Time report provides this detail

  • Environmental factors -- approximately 20% of calls were rated fewer than three stars by users because of background noise on the call; this is mostly a behavioral issue, although noise-canceling headsets might diminish background noise

More perplexing, the correlation between ratings of users who participated on the same conference call is weak. That is, one user might rate the call quality as five stars (very good), while another user on the same call rates it as three stars (fair). We believe expectations play a strong role in a user's interpretation of Skype for Business call quality. While most users have been trained to accept as normal cellular calls dropping while in an elevator or when driving into an underground parking lot, perhaps the relentless UC industry chant of "any device, anywhere" has inflated expectations for UC calls, even in compromised environments.

Ultimately, a communication experience is good if a user declares it so. As such, analysis of what makes a call good should start with Rate My Call data and then look for positive factors that correlate strongly with highly rated calls (an average user score of 3.5 or higher) or negative factors that correlate strongly with poorly rated calls (an average user score of 2.5 or lower). In some cases, simply talking with users who consistently provide low scores causes their scores to increase. Repeated one-star call ratings might end up being a protest vote against change or a method to seek more attention from IT, and talking directly to your end users is always a good call!
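As a starting point for that kind of analysis, a minimal sketch follows: it splits calls into highly rated (average score of 3.5 or higher) and poorly rated (2.5 or lower) groups and compares candidate factors across them. The column names (user_stars, used_vpn, approved_headset, join_time_seconds) are placeholders for whichever factors you actually track.

```python
# Minimal sketch: compare candidate factors between highly and poorly rated calls.
# Column names (user_stars, used_vpn, approved_headset, join_time_seconds) are placeholders.
import pandas as pd

calls = pd.read_csv("rated_calls.csv")

high = calls[calls["user_stars"] >= 3.5]   # highly rated calls
low = calls[calls["user_stars"] <= 2.5]    # poorly rated calls

for factor in ["used_vpn", "approved_headset", "join_time_seconds"]:
    print(f"{factor}: highly rated avg = {high[factor].mean():.2f}, "
          f"poorly rated avg = {low[factor].mean():.2f}")
```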

The EnableUC Skype Insights service provides customized analysis and recommendations based on your specific data. Quantified metrics help ensure you're delivering a robust, high-quality, and reliable service that's being well adopted and can drive continuous improvement. If you have specific reporting questions please comment below, send me a tweet @kkieller, or message me on LinkedIn.

About the Author

Kevin Kieller

Kevin Kieller is a globally recognized Unified Communications, Collaboration, and technology analyst, strategist, and implementation leader. He is part analyst and part consultant, which ensures he understands both the "big picture" and the realities of real-world implementation.

Kevin and the team he created help organizations select and successfully implement leading collaboration, communication, and cloud technologies, focusing on delivering positive business outcomes. He also helps vendors generate awareness and demand and position their products, often leveraging his unique understanding of the Microsoft ecosystem.

Kevin leads the elite BC Strategies Expert group and is part of the No Jitter technical analyst team where he covers Microsoft Teams, Copilot, UC, Collaboration, and AI for productivity. He presents regularly at Enterprise Connect and keynotes many other events focused on technology effectiveness.

He has led the development of many technology strategies for medium and large organizations, served as Bell Canada's lead UC strategist, developed new practice offerings for Softchoice, and advised hardware and software companies interested in expanding within, or competing against, the Microsoft ecosystem.

Kevin is comfortable interfacing at both the most senior (CxO) levels and getting "his hands dirty" helping technical teams.

Kevin has conceived, designed, and overseen the development of business, educational, and recreational software products and cloud-based services that have been used by millions of people in over 17 countries worldwide. A long time ago he created an award-winning game for the Commodore 64, and ever since he has been committed to delivering business value through technology.