
From the Cloud to the Fog

Just when you were reconciled to the trend of centralization in the Cloud, a decentralizing Fog rolls in.

Eric Krapf

May 19, 2014



You knew it had to happen: On the heels of "cloud computing" comes "fog computing."

Just when you were getting used to the idea of centralizing resources in the Cloud, now the idea of "fog computing" says you can and should enable computing at distributed resources, out at the most remote edges of your network infrastructure. Cisco has been the main proponent of "fog computing," a model that fits neatly with one of Cisco's mega-trends, the Internet of Everything, which relies on massive numbers of widely distributed sensors and other endstation-based resources, all spread out to the very edges of the Internet, in a "fog" of resources.

Cisco introduced the idea of "fog computing" earlier this year, with the rollout of IOx, an application enablement framework that Cisco's Roberto de la Mora described as "an intermediate layer between the 'things' and the cloud."

"Cisco is combining the communication and computing resources that are required for IoT into a single platform for application enablement at the network edge," de la Mora wrote. In practice, what this means is that monitoring or control applications for the Internet of Everything can run on Cisco routers in remote locations, rather than requiring the data to be backhauled to a datacenter in, yes, the Cloud.
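The core idea--process data where it's generated, and backhaul only what matters--can be made concrete with a small sketch. The names here (`summarize_at_edge`, `THRESHOLD`) are illustrative only, not part of Cisco's IOx framework; think of it as the kind of logic that might run on an edge router instead of in a datacenter:

```python
# Conceptual sketch of the fog model: reduce a batch of raw sensor
# readings to a compact summary at the edge, so only the summary and
# any exceptional values travel upstream to the cloud.
# THRESHOLD and the function name are hypothetical, not an IOx API.

from statistics import mean

THRESHOLD = 90.0  # hypothetical alert threshold for a sensor metric

def summarize_at_edge(readings):
    """Turn raw readings into a small summary dict plus alert values."""
    alerts = [r for r in readings if r > THRESHOLD]
    return {
        "count": len(readings),
        "mean": mean(readings),
        "max": max(readings),
        "alerts": alerts,  # only out-of-range values get backhauled
    }

# The raw batch stays local; only this small dict goes to the cloud.
summary = summarize_at_edge([70.0, 72.0, 96.0, 70.0])
print(summary)
```

The payoff is in the ratio: a pedestal-mounted router might collect thousands of readings per minute but transmit only a handful of numbers, which is exactly the bandwidth argument for pushing computing to the edge.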

It's fitting that this latest buzzword got its biggest play to date here at Cisco Live in the often-foggy San Francisco. In his Cisco Live keynote, CEO John Chambers discussed some use cases that illustrate this view of networked computing.

The speech's overarching hypothetical use case, meant to illustrate the range of technologies that Cisco Live spotlights, took the example of a railroad operation that needs to maintain track, cars, and freight; monitor stations and other operations; and deal with passengers as customers seeking to do business on a consumer level. One aspect of this hypothetical scenario was the ability to put ruggedized routers in pedestals at the sides of tracks, and elsewhere in the far-flung railroad infrastructure, and use high-end IOx-based processing to crunch the data right in those pieces of equipment, rather than backhauling it to a centralized application in a datacenter.

For a real-world discussion of the value of this capability, Chambers brought out Alan Matula, executive VP and CIO of Royal Dutch Shell, who said the energy company was looking to use such capability in the vastly distributed world of oil drilling. He said the business case for such an implementation is especially compelling in the new world of shale oil exploration, where companies drill not a handful of wells, as in traditional exploration, but hundreds or even thousands of wells at a time. In this situation, scale, speed, and quality are crucial, so IT infrastructure must support those attributes, and supporting them directly at the edge may be the most effective way to do so.

Though fog computing is not specifically tied to voice, video, or even the more nebulous idea of "collaboration," I think the basic concept does strike a chord in the world of enterprise communications. For one thing, the "centralized/distributed" pendulum is constantly swinging back and forth, and we saw this dynamic in the early days of VoIP, long before the Internet of Everything.

When Cisco put Survivable Remote Site Telephony in edge routers for branch offices, and its peers in the VoIP industry followed suit, they were creating a "fog" of converged communications and networks. At the time, of course, networks of premises-based TDM PBXs were the original "fog" of communications--but once VoIP emerged, the destination for this application seemed to be the datacenter, as enterprise applications and resources were consolidated.

And yet.... As soon as an application or function makes it into the datacenter, it seems like someone comes up with a reason why it should be distributed toward the edges. And far more than relatively light, not-necessarily-realtime IoE transmissions, voice and video quality can only suffer if subjected to excessive backhaul requirements and remote processing.

"Fog computing" is still a new buzzword. It won't displace the Cloud model any time soon. But it's a term I think you'll start to hear--maybe like a foghorn in the distance, maybe getting more insistent as the Internet of Things starts to become more of a daily reality for your enterprise. Maybe if voice and video quality get worse over time, you'll hear that foghorn sooner than you think.


About the Author

Eric Krapf

Eric Krapf is General Manager and Program Co-Chair for Enterprise Connect, the leading conference/exhibition and online events brand in the enterprise communications industry. He has been Enterprise Connect's Program Co-Chair for over a decade. He is also publisher of No Jitter, the Enterprise Connect community's daily news and analysis website.
 

Eric served as editor of No Jitter from its founding in 2007 until taking over as publisher in 2015. From 1996 to 2004, Eric was managing editor of Business Communications Review (BCR) magazine, and from 2004 to 2007, he was the magazine's editor. BCR was a highly respected journal of the business technology and communications industry.
 

Before coming to BCR, he was managing editor and senior editor of America's Network magazine, covering the public telecommunications industry. Prior to working in high-tech journalism, he was a reporter and editor at newspapers in Connecticut and Texas.