UC and Data Convergence Increasing the Need for QoS

Data, voice and video are very different types of applications, which is why QoS is necessary.

Greg Wolf

April 12, 2012

Quality of Service (QoS) is not a new concept or technology. However, while attending Enterprise Connect 2012 a few weeks ago, I observed a number of Unified Communications (UC) technologies that renewed my respect for its importance. Although UC may not be widely adopted yet, there is evidence that it will continue to become more pervasive over time. As part of that trend, QoS will need to be an integral part of any enterprise network with serious aspirations of running UC applications and data applications on the same network. The alternative to QoS, albeit a costly and unwise one, is to over-provision the network with huge amounts of bandwidth.

Over the past several months I have written a number of articles focused on the increasing amount of data consumed by streaming video, video surfing, and specialized apps residing on smartphones and tablets. During Enterprise Connect I was able to see firsthand how far applications such as video conferencing, video-enabled communications, and collaboration have evolved, all of which contribute to the growing data consumption story. Another interesting technology is the evolving WebRTC standard (real-time voice and video built into the web browser), which will introduce yet another application capable of consuming lots of data. Understanding the amount of data being consumed is important, but UC applications also require a specific level of network performance in order to work properly.

Data, voice and video are very different types of applications, which is why QoS is necessary. Voice and video applications need a deterministic level of network performance: guaranteed amounts of bandwidth with minimal jitter and packet loss. Data applications are much more forgiving and tolerant of the typical ebb and flow of network conditions. When a web page encounters congestion, packet loss, or even large amounts of latency, the underlying TCP/IP protocol is robust enough to recover; the user may not be happy with the response time, but the session will remain alive and functional. This is not the case with voice and video applications, where congestion, packet loss, or excessive jitter will more than likely render the application unusable.
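
To make the application side of this concrete, here is a minimal Python sketch (my illustration, not from the article) of how a voice application marks its packets with the DSCP Expedited Forwarding class so the network can give them preferential treatment. It assumes a Linux-style host where the IP_TOS socket option is available; the address, port, and payload are placeholders.

```python
import socket

# DSCP Expedited Forwarding (EF, value 46) is the class conventionally
# used for voice. The DSCP value occupies the upper six bits of the IP
# TOS byte, so it is shifted left by two bits.
DSCP_EF = 46
TOS_VOICE = DSCP_EF << 2  # 0xB8

# A UDP socket marked so that QoS-enabled routers and switches can
# recognize the traffic and place it in their priority queue.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VOICE)

# From here the application sends its voice payloads as usual; the
# marking rides along in every packet's IP header.
sock.sendto(b"voice payload", ("192.0.2.10", 5004))  # placeholder address/port
```

Marking by itself guarantees nothing: the network must be configured to recognize and honor the marking, which is where network-side QoS mechanisms come in.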

QoS was born out of the necessity to allow applications with different performance characteristics to coexist on a common network; before QoS emerged, conventional network design wisdom was to keep data, voice and video separated. And once QoS is in place, it cannot be ignored: it needs to be monitored and tweaked to make sure it continually meets the needs of all of the applications. This article is not intended to be a QoS tutorial, which is why I am not going to weigh the merits of one QoS mechanism versus another. The key point I want to make is that QoS is not merely a nice-to-have when integrating data, voice and video--it is a requirement. As UC makes further inroads into enterprises, network managers will need to start planning for the correct QoS implementation.
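
Without weighing one mechanism against another, the core idea most of them share can be shown with a toy strict-priority scheduler in Python: voice is always served before data, which is roughly what low-latency queuing does on a congested link. This is a deliberately simplified sketch, not any vendor's implementation, and all class and variable names are illustrative.

```python
from collections import deque

class StrictPriorityScheduler:
    """Toy two-class QoS scheduler: the voice queue is always served
    before the data queue, so voice latency stays low even when data
    traffic floods the link."""

    def __init__(self):
        self.voice = deque()  # high priority (e.g., packets marked DSCP EF)
        self.data = deque()   # best effort

    def enqueue(self, packet, is_voice=False):
        (self.voice if is_voice else self.data).append(packet)

    def dequeue(self):
        # Voice always wins; data is served only when no voice is waiting.
        if self.voice:
            return self.voice.popleft()
        if self.data:
            return self.data.popleft()
        return None

# A congested moment: five data packets are already queued when a
# single voice packet arrives.
sched = StrictPriorityScheduler()
for i in range(5):
    sched.enqueue(f"data-{i}")
sched.enqueue("voice-0", is_voice=True)

print(sched.dequeue())  # -> voice-0, despite arriving last
```

Real schedulers add policing so the priority class cannot starve everything else, which is exactly the kind of ongoing tuning the paragraph above argues for.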

About the Author

Greg Wolf

Greg Wolf is a principal with NetForecast, helping to develop and implement performance engineering practices for enterprises. Greg has more than 28 years of experience in data networking, systems integration, and software engineering. Greg's areas of expertise include analyzing how applications perform on networks, network capacity planning, network and application impact studies, traffic engineering, network design, and data modeling using simulation. In addition, Greg has authored dozens of white papers and reports.

Greg is the inventor of a patented process and methodology for analyzing how applications perform on networks, entitled "Method and Apparatus for Computer Network Analysis." Greg founded a consulting practice called the Network Analysis Program while at INFONET Services, and spent seven years leading OPNET Technologies' Enterprise Application Performance Management consulting practice.

Drawing upon years of direct client experience and in-depth technical knowledge, Greg continues to help companies develop and implement performance engineering practices in the area of application monitoring, troubleshooting and optimization.