Resources / FogBlog

January 18, 2017

Top 10 myths of fog computing

Like any emerging technology, fog computing comes with its share of hype and misperceptions.  To set the record straight, the OpenFog Consortium takes on the top 10 myths of fog computing.

 

Myth #1. Fog and edge are the same thing.

Fog computing and edge computing are closely related concepts, but they are not synonyms. Conceptually, fog computing involves dynamic pooling of resources and data sources among many devices that reside between the cloud and the things, while edge computing statically pre-provisions a single device somewhere at the edge to perform pre-defined functions. The fog layer also spans a hierarchy covering not only the network perimeter but all areas within the cloud-to-things continuum. Fog nodes can be deployed anywhere in this range, including at the edge or on the things themselves. In a fog architecture, computing and control are carried out at or near the end-user device, rather than in remote data centers or cellular core networks. Fog computing always uses edge computing, but not the other way around. In addition, fog is inclusive of the cloud, while edge excludes it.

Myth #2. Fog is a replacement for cloud.

Fog does not replace cloud computing. Rather, the fog architecture is designed from the ground up to augment and to complement the cloud by bringing services typically associated with the cloud closer to the things. Nor is fog just a mini-cloud: while fog can have a smaller footprint than cloud and will deliver many of the same computing services, it is fundamentally different from cloud. The most notable difference is that fog doesn’t just implement small clouds at network edges – it enables computing anywhere in the cloud-to-things continuum.

With fog, distributed resources are integrated in the cloud, at the fog layer, and on the things. These unified cloud/fog services include not just computing but also communication, storage, acceleration, and control functions. The size of the fog resource is also flexible – it can scale from a single small fog node to a hierarchical network full of nodes, depending on the needs of the applications.

Myth #3. Fog is just a new name for existing architectures.

Fog is not a rebranding of legacy technology. To the contrary, fog is a new and clearly differentiated architecture that will be required to get the most out of emerging applications, notably IoT, embedded AI, 5G and deep analytics. A new, standards-based fog architecture based on 8 foundational elements is being created under the aegis of the OpenFog Consortium.

Myth #4. Fog is costly.

Some think fog will be a costly technology because it adds a layer to enterprise network architectures. In practice, the fog architecture makes operations more efficient and less costly by solving the bandwidth, latency, reliability and communications challenges associated with next-generation networks. The fog layer is simply a conceptual system-level, horizontal architecture that models how resources and services such as computing, control, storage, networking, acceleration and communications are distributed and delivered closer to the data sources and actuators (the “things”). The fog layer is necessary to meet the requirements of many emerging applications in IoT, AI and 5G, and can do so with a lower total lifecycle cost of ownership than alternative network architectures.

 

Myth #5. Fog nodes are constrained devices.

Fog nodes are functionally rich and architected to support the significant compute, storage and memory resources needed to run advanced digital applications. Some fog nodes support virtualization and multi-tenancy, can run multiple applications concurrently, and can be clustered together to increase resources for a given application or to produce fault-tolerant architectures. Fog nodes endowed with multiple cores, several GBs of memory, and TBs of storage are expected to become commonplace.

 

Myth #6. Fog creates new silos.

Because fog can eliminate physical silos, some believe it will simply lead to virtualized silos, increasing network complexity and causing new headaches. In practice, the fog architecture supports role-based access control along with data-sharing policies, creating data workflows across processes that were previously siloed. The open fog computing architecture enables unified management of cloud, network, and fog, providing a single pane of glass for services that were previously managed separately. The isolation between virtualized domains (tenants) at the management and data plane levels can be controlled and adjusted according to the needs of the use case.

 

Myth #7. Fog is only applicable to wireless environments.

Fog works over wireless and wireline networks, and also inside these networks. For instance, fog runs inside radio access networks, allowing radio network control functions to be smartly distributed and user applications to be deployed within those networks. While fog is extremely useful in both licensed and unlicensed wireless environments, there are already many real-world situations where fog is used with wireline connectivity, as well as with rich combinations of wireline/optical and wireless connectivity. For example, fog is expected to grow dramatically in the Industrial Internet of Things (IIoT) arena, where industrial elements based on wired SCADA systems, OPC-UA interfaces, Modbus, and so on will be connected to fog nodes.

 

Myth #8. Fog is only useful where latency to the cloud is unacceptable.

While it’s true that a primary benefit of fog computing is its ability to reduce latency, the drivers for fog go far beyond pure latency issues. These include a variety of operational, regulatory, business, and reliability concerns. Critical services can be enabled by fog and operated autonomously from the cloud, at the perimeter or at a variety of points in the network. Fog is equally advantageous in areas where network connectivity can be unreliable due to weather or other conditions. It also significantly reduces network bandwidth loads by processing data closer to where it is generated. Fog can also enhance IoT network security and privacy, especially where the endpoint things lack the resources for strong cryptography. With fog, operational and business policies can be applied to the data, enabling it to be processed and analyzed more efficiently on premises.
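As an illustrative sketch only (not from any OpenFog specification), the bandwidth-reduction idea above can be shown with a hypothetical fog node that summarizes raw sensor readings locally and forwards only compact statistics and anomalies upstream; all names, the window size, and the anomaly threshold here are invented for the example:

```python
# Hypothetical sketch: a fog node reduces raw sensor traffic by summarizing
# readings locally and forwarding only compact records to the cloud.
# Window size and threshold are illustrative, application-specific choices.

from statistics import mean

ANOMALY_THRESHOLD = 90.0  # e.g. degrees C; set per application
WINDOW_SIZE = 10          # raw readings folded into one upstream message

def summarize_window(readings):
    """Reduce a window of raw readings to one compact cloud-bound record."""
    return {
        "count": len(readings),
        "mean": mean(readings),
        "max": max(readings),
        "anomalies": [r for r in readings if r > ANOMALY_THRESHOLD],
    }

def fog_filter(stream, window=WINDOW_SIZE):
    """Yield one summary per window instead of forwarding every reading."""
    buf = []
    for reading in stream:
        buf.append(reading)
        if len(buf) == window:
            yield summarize_window(buf)
            buf = []
    if buf:  # flush a partial final window
        yield summarize_window(buf)

# 20 raw readings become just 2 upstream messages.
raw = [70.0, 71.5, 69.8, 95.2, 70.1] * 4
summaries = list(fog_filter(raw))
print(len(raw), "->", len(summaries), "messages")
```

The same pattern applies whether the upstream link is constrained by bandwidth, cost, or reliability: the fog node acts on the data locally and escalates only what the cloud actually needs.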

 

Myth #9. Fog is far off in the future.

Fog computing is here today. Industry leaders and a rapidly growing ecosystem of startups have already begun developing and deploying early-stage fog products and services, helped by leading research from universities across the world. End users such as the city of Barcelona are completing pilot deployments and moving into full-scale production. The OpenFog reference architecture will be available in early 2017, at which point formal standards will be produced by a Standards Development Organization. Check out real-world use cases for some early examples.

 

Myth #10. Fog will not be necessary in the future.

This, of course, depends on how far into the future we are talking! Within the limits of the speed of light, network bandwidth will continue to increase and latency will continue to decrease. For the foreseeable future, however, there will still be requirements for bringing more resources and more intelligence closer to the endpoints and the users, and for operating them with ever-increasing performance. History has shown that the demand for computing always outpaces the supporting resources, and fog will be no exception: it was developed from the ground up specifically to address these and related issues.

 

Want to set the record straight on fog?  We welcome direct conversation.  Contact us at info@openfogconsortium.org to get started.