Terry Slattery | August 07, 2018 |

 
   

Examining Emerging Network Protocols


Will new network protocols improve the net? What will it take for any of them to gain acceptance?

Do we really need new network protocols? The short answer is yes. Sure, the existing set of network protocols has worked surprisingly well, and attempts at improving it have frequently resulted in no improvement. However, some smart people have identified significant changes to fundamental protocols that could simplify networking. The real question is whether the changes are significant enough to justify their adoption.

If what we have is working so well, why do we need new protocols? Just examine the layers of functionality that are required to implement security, NAT, QoS, and content management. It gets complicated very quickly as the layers interact with one another. Some of the proposed protocols potentially result in network simplification. IPv6 was supposed to improve security, but further examination revealed some significant security holes, such as neighbor discovery and automatic tunneling.

The new protocols are:

  • Named Data Networking (NDN)
  • Recursive InterNetwork Architecture (RINA)
  • Enhanced IP
  • Easy IP (EZIP)

Named Data Networking
NDN is a relatively new network protocol that changes how our devices (computers, tablets, phones) retrieve the data that we want to view. It is a version of Content-Centric Networking. It retrieves data by name instead of identifying data by the server on which it is located. For example, a popular online TV program could be referenced by its name instead of the servers on which it is hosted (e.g., netflix.com or amazon.com). While the program's name may still include the publisher's name, the data could be stored in multiple places on the Internet, making access faster. You can think of this as a scaled-up version of a content distribution network (CDN). Even better, different versions of the same data can have different names, allowing access to specific versions.
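The core idea -- that a name, not a server address, locates the data, and that any router along the way can cache and serve it -- can be sketched in a few lines. This is an illustrative simulation only; the class and method names are invented and bear no relation to NDN's actual wire format.

```python
# Minimal sketch of name-based retrieval, the central NDN idea.
# A request for a name is satisfied by whichever router holds the data,
# and intermediate routers cache what passes through them.

class NdnRouter:
    def __init__(self):
        self.content_store = {}          # name -> data: the in-network cache

    def publish(self, name, data):
        self.content_store[name] = data

    def fetch(self, name, upstream=None):
        # Serve from the local cache if we already hold the named data...
        if name in self.content_store:
            return self.content_store[name]
        # ...otherwise forward the request upstream and cache the reply,
        # so later requests for the same name are served locally.
        if upstream is not None:
            data = upstream.fetch(name)
            self.content_store[name] = data
            return data
        raise KeyError(name)

origin = NdnRouter()
origin.publish("/example-tv/show/s01e01", b"video bytes")
edge = NdnRouter()

# First request travels to the origin; the edge router caches the data.
assert edge.fetch("/example-tv/show/s01e01", upstream=origin) == b"video bytes"
# Second request is served from the edge cache -- the name found the data.
assert edge.fetch("/example-tv/show/s01e01") == b"video bytes"
```

The second `fetch` succeeds with no upstream at all, which is the CDN-like effect described above: the data has migrated toward the requester.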

To understand more about how it works and why it is needed, I highly recommend watching A New Way to Look at Networking, a presentation by Van Jacobson, one of the premier network researchers in the world. An ACM interview with Van Jacobson fills in some more of the operational details.

Sorell Slaymaker's recent No Jitter blog, "IPv6 is Tactical, NDN is Strategic," suggests that NDN will play a different role than I envision. Regardless of the view, NDN fulfills an important role in the evolution of networking. To me, the combination of improved efficiency and enhanced trustworthiness of the data is a compelling argument. A possible negative factor for NDN is the requirement to store per-request state in each router (the Pending Interest Table, which records outstanding requests by name so that returned data can be forwarded back to the requesters).

A first look suggests that NDN is only useful for retrieving static content like web pages and streaming media (both video and audio). In his ACM interview (see link above) Jacobson described using NDN to replace UDP in two laptops for the purpose of conducting an audio call. The two endpoints were known by names, so when the Ethernet interface was unplugged, the call was able to continue by discovering an alternate path between the systems.

The nice thing is that NDN runs over whatever infrastructure exists -- be that IPv4, IPv6, high-speed optical networks, or low-speed ad-hoc networks. All that's needed is to begin adding NDN routers to key parts of the Internet. We immediately begin to see the benefit as the data are distributed to locations closer to recipients. Slaymaker (see link above) reports that Gartner estimates that NDN is in the early phases of implementation and that it may take ten years to deploy. That's likely to be a reasonable figure, absent any reason to accelerate the process (e.g., congestion collapse of the Internet).

I expect NDN to be the most viable of all the options because it looks like a clean transition with minimal bumps in the road. Its advantage is being able to operate over any transport. An interesting set of experiments ran interactive videoconferencing sessions over it (see NDN-RTC). The testing was very successful and identified several areas for additional investigation and refinement.

Recursive InterNetwork Architecture
Another new initiative is RINA, an effort to re-examine the fundamentals of networking and to simplify their operation. John Day, who was involved in the design of the ARPANET, examined the Internet protocol architecture in his book Patterns in Network Architecture. His analysis determined that network communications are really layers of the same function: inter-process communication (IPC). This observation results in great simplification: the same IPC mechanism can be used throughout the networking stack. The only requirement is to adjust the parameters to operate at different scopes.

RINA uses nested scopes, where the higher layers have larger scope than lower layers. A good explanation of RINA can be found here from The IRATI Project, and current work is being reported by the Pouzin Society.
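The recursion described above can be sketched as one IPC abstraction instantiated at every layer, each instance differing only in its scope. The class and method names here are invented for illustration; real RINA layers (DIFs) carry addressing, QoS, and error control tuned to their scope.

```python
# Illustrative sketch of RINA's recursion: the same IPC mechanism reused
# at every layer, parameterized only by scope.

class IpcLayer:
    def __init__(self, scope, lower=None):
        self.scope = scope       # e.g. "link", "network", "application"
        self.lower = lower       # the smaller-scope layer beneath us

    def send(self, dest, payload):
        # Every layer does the same job -- deliver a payload to a peer
        # process within its scope -- and delegates wider delivery to the
        # layer below. Here we just record the path the payload takes.
        trace = [f"{self.scope}->{dest}"]
        if self.lower is not None:
            trace += self.lower.send(dest, payload)
        return trace

link = IpcLayer("link")
network = IpcLayer("network", lower=link)
app = IpcLayer("application", lower=network)

print(app.send("peer", b"hello"))
```

Note that there is a single `IpcLayer` class, not one protocol per layer -- which is exactly the simplification Day's analysis points to.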

The neat thing about RINA is that it addresses the need for functions like security, QoS, NAT, mobility, and multi-homing without extra features and protocols. My reading of the protocol turned up an interesting fact: in 1982, Richard Watson proved that reliable transport requires only three timers, all based on the packet's maximum lifetime (the Delta-T protocol). Explicit connection setup and teardown are not needed, so we can dispense with the SYN and FIN portions of a TCP connection. This would speed up reliable connection setup, which is a big factor for web pages that access many components from multiple servers. A further implication is that SYN attacks become impossible.
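The Delta-T idea can be sketched roughly as follows: connection state is created implicitly by the first data packet and bounded by timers derived from the maximum packet lifetime (MPL), so no handshake is needed. This is a heavily simplified illustration; the timer names and values are assumptions, not Watson's exact formulation.

```python
# Hedged sketch of timer-based connection management in the Delta-T style.
# Watson showed that state need only be held for a bound derived from the
# maximum packet lifetime (MPL) plus send/ack windows; after that bound,
# sequence numbers can be safely reused and state discarded.

MPL = 30.0    # maximum packet lifetime, seconds (illustrative value)
R = 5.0       # time a sender may keep retransmitting (illustrative)
A = 1.0       # time a receiver may delay an acknowledgment (illustrative)

def state_timeout():
    return MPL + R + A    # how long connection state must be retained

class DeltaTReceiver:
    def __init__(self):
        self.conn = {}    # peer -> time at which its state expires

    def on_packet(self, peer, now):
        # No SYN: the first data packet implicitly opens the connection.
        fresh = peer not in self.conn or now >= self.conn[peer]
        self.conn[peer] = now + state_timeout()   # refresh the timer
        return "new-connection" if fresh else "existing"

rx = DeltaTReceiver()
assert rx.on_packet("a", now=0.0) == "new-connection"
assert rx.on_packet("a", now=10.0) == "existing"
assert rx.on_packet("a", now=100.0) == "new-connection"  # state timed out
```

Because no half-open handshake state exists, there is nothing for a SYN flood to exhaust -- the implication noted above.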

I've written before about the need to simplify the plethora of network protocols that are required to build functional networks. Every network layer in the existing TCP/IP model has one or more unique protocols that must be properly configured and operating in order to support the applications, and ultimately the business that depends on those applications. Whenever we discover a new network problem to solve, we address it by creating a new mechanism and a protocol to distribute the associated data.

There are several organizations working on RINA (see the links above), but I rarely see any mention of it in general network reading. My reading tends to be more focused on operational things instead of research, so there's some amount of self-selection involved.

I expect RINA to take many more years to be accepted, if it is accepted at all. The number of devices that use an IP stack would be a big impediment. We're already in a transition to IPv6, so executives will want to understand why we need to make a similar move to RINA. Whether or not it's worth the effort will depend on how well we can solve the problems that RINA addresses (QoS, NAT, security, mobility). Network security in particular will be a key factor. Until then, we'll continue to work with the set of mechanisms that we've developed to address the problems of TCP/IP.

Enhanced IP
Enhanced IP (IETF doc) is more of an address expansion mechanism than a new protocol. It uses IP Option 26 to double the size of the IPv4 address space. Existing network equipment can easily handle Enhanced IP packets, except where forwarding of packets containing IP options is explicitly denied. The result is 64-bit IP addresses. Systems that perform NAT will have to be patched to handle the conversion.
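The effect of carrying a second 32-bit address in an option can be pictured as composing a 64-bit address from a routable outer IPv4 address and an inner extension address. This is a conceptual sketch of the address arithmetic only; the function name and packing order are illustrative, not the draft's wire format.

```python
# Conceptual sketch of Enhanced IP's address expansion: a public IPv4
# address plus a 32-bit extension carried in an IP option yields an
# effective 64-bit address.

import ipaddress

def enhanced_address(outer: str, inner: str) -> int:
    """Combine a routable IPv4 address and an extension address into
    a single 64-bit value (outer address in the high 32 bits)."""
    hi = int(ipaddress.IPv4Address(outer))
    lo = int(ipaddress.IPv4Address(inner))
    return (hi << 32) | lo

addr = enhanced_address("203.0.113.1", "10.0.0.5")
print(f"{addr:016x}")   # 64-bit value; top half routes on today's Internet
```

Only the high 32 bits need to be understood by unmodified routers, which is why existing equipment can forward these packets as long as it doesn't drop packets with IP options.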

Enhanced IP proponents claim that the IP code modifications are about 400 lines in length and are 100 times smaller than the code to do IPv6. I've not seen a security analysis of Enhanced IP. I would like to see several security studies to make sure that it doesn't expose something that the proponents haven't considered.

Enhanced IP reminds me of TUBA (TCP and UDP with Bigger Addresses), which was one of the alternatives considered before the current IPv6 suite was selected. The main objective of both Enhanced IP and TUBA was a larger address space.

The downside to Enhanced IP is that it also requires a modification to the IP stack in all end nodes. Proponents claim that the change is minimal and can be easily handled with loadable kernel modules or small patches. Getting these changes into the corporate computing world will be challenging, much less the work required to outfit all the other active personal computers on the Internet.

Easy IP (EZIP)
Easy IP (slides) is also an address extension mechanism. It uses the existing IPv4 reserved address space of 240/4 and a set of Semi-Public Routers (SPR) to increase the address space. Conceptually, the addressing works similarly to local dialing in Central Office Exchange (CENTREX) telephone systems. The local numbers behind the SPR are not visible across the Internet. Global routing is only done on the 240/4 addresses.
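The split between globally routed 240/4 addresses and locally visible addresses behind an SPR can be illustrated with a simple membership check. The function name and mapping are illustrative; the only fact taken from the proposal is that 240/4 is the globally routed space.

```python
# Sketch of EZIP-style addressing: only the 240/4 prefix is routed across
# the open Internet; addresses behind a Semi-Public Router (SPR) stay
# local, much as CENTREX extensions hide behind one dialable number.

import ipaddress

EZIP_SPACE = ipaddress.ip_network("240.0.0.0/4")

def globally_routable(addr: str) -> bool:
    # Internet routers would forward only on addresses in 240/4;
    # anything else is significant only within an SPR's local scope.
    return ipaddress.ip_address(addr) in EZIP_SPACE

assert globally_routable("240.1.2.3")
assert not globally_routable("192.168.1.10")   # visible only behind the SPR
```

The 240/4 block covers 240.0.0.0 through 255.255.255.255, so each SPR address fronts an entire private address plan behind it.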

EZIP has the advantage of only needing the deployment of SPRs to handle the 240/4 address space. However, there are some network devices that don't expect to see packets from this address space and will discard them. These devices would have to be upgraded too. If all we need is more address space, this protocol might be the simplest to deploy.

Summary
All four of these protocols are in active development. The problem as I see it is making the required changes to the impacted systems around the world. The solution doesn't have to encompass every single legacy endpoint, but any viable solution should work for a reasonable percentage of the global IPv4 systems.

Will any of these really reach widespread adoption? It depends on the deployment of the enabling infrastructure. If the endpoint vendors (Microsoft, Red Hat, Apple, Android, Linux, etc.) and network equipment vendors (Cisco, Juniper, Arista, etc.) provide the necessary functionality, then we have a chance of making something happen. In any case, it will be years before any of these solutions develop the critical mass necessary for widespread adoption.
