What Is Fog Computing? Components, Examples, and Best Practices


Fog computing is defined as a decentralized infrastructure that places storage and processing components at the edge of the cloud, where data sources such as application users and sensors exist. This article explains fog computing, its components, and best practices for 2022 in detail.

What Is Fog Computing?

Fog computing is a decentralized infrastructure that places storage and processing components at the edge of the cloud, where data sources such as application users and sensors exist.

Fog Computing Architecture

According to Domo’s ninth annual ‘Data Never Sleeps’ infographic, 65% of the world’s population — around 5.17 billion people — had access to the internet in 2021. The amount of data consumed globally was 79 zettabytes, and this is projected to grow to over 180 zettabytes by 2025. The rapid growth of wireless technology has given mobile device users tremendous computing power. 

No matter the industry vertical, today's enterprises see an outpouring of data from consumers. The internet of things (IoT) drives data-intensive customer experiences involving anything from smart electric grids to fitness trackers. Cloud computing and artificial intelligence allow for the dynamic processing and storage of these large amounts of data. This data enables organizations to make informed decisions and protect themselves from vulnerabilities at both business and technological levels.

This data explosion has, however, left organizations questioning the quality and quantity of data that they store in the cloud. Cloud costs are notorious for escalating quickly, and sifting through petabytes of data makes real-time response difficult. 

Consider the data sent by a temperature sensor on a factory line. The reading could be pushed to the cloud every second, with a cloud service checking for fluctuations. A more intelligent approach is to check locally whether the temperature has changed over the last few seconds. Only when a change is detected is the data pushed to the cloud, where it is stored to verify the proper operation of the production line. Temperature readings take up little space, but the same scenario applies to devices such as CCTV cameras that produce large volumes of video and audio data.
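As a rough sketch of this "push only on change" idea, the Python snippet below polls a simulated sensor once per second and forwards a reading only when it differs noticeably from the last value sent. The threshold and the read_sensor and send_to_cloud stand-ins are illustrative assumptions, not part of any particular product.

```python
import random
import time

CHANGE_THRESHOLD = 0.5  # degrees Celsius; illustrative value

def read_sensor() -> float:
    """Stand-in for a real temperature sensor read."""
    return 21.0 + random.uniform(-1.0, 1.0)

def send_to_cloud(reading: float) -> None:
    """Stand-in for a real cloud upload (e.g., an HTTPS or MQTT publish)."""
    print(f"pushed to cloud: {reading:.2f} C")

last_sent = None
for _ in range(10):                      # in practice this would loop indefinitely
    reading = read_sensor()
    # Push only when the value has moved noticeably since the last upload.
    if last_sent is None or abs(reading - last_sent) >= CHANGE_THRESHOLD:
        send_to_cloud(reading)
        last_sent = reading
    time.sleep(1)                        # one reading per second
```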

This localized storage and computation of data before it is sent to the cloud is fog computing. Fog computing uses devices with lower processing capabilities to share some of the cloud's load, so that the cloud is reserved for long-term and resource-intensive analytics. Devices at the 'edge' of the cloud, i.e., where the organization's system interacts with the outside world, take care of short-term and time-critical analytics such as fault alerts and alarm statuses.

Edge computing is a subset of fog computing that involves processing data right at the point of creation. Edge devices include routers, cameras, switches, embedded servers, sensors, and controllers. In edge computing, the data generated by these devices is stored and processed on the device itself, and the system does not attempt to share this data with the cloud.

Fog computing introduces a layer between edge devices and the cloud. This layer relies on a set of small computing servers that reside near the edge devices, though not necessarily on the devices themselves. These servers are connected to each other and to centralized cloud servers, enabling the intelligent flow of information. Together, these small units handle the pre-processing of data, short-term storage, and rule-based real-time monitoring. The fog computing architecture reduces the amount of data transported through the system and improves overall efficiency.
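The following minimal Python sketch illustrates the kind of work such a fog layer performs: it applies a rule-based check locally and forwards only a compact summary to the cloud. The alert limit and the summary fields are assumed purely for illustration.

```python
from statistics import mean

ALERT_LIMIT = 80.0  # hypothetical temperature limit for a local alert

def fog_node_process(readings: list[float]) -> dict:
    """Pre-process a batch of edge readings before anything reaches the cloud."""
    # Rule-based, time-critical check handled locally at the fog layer.
    alerts = [r for r in readings if r > ALERT_LIMIT]
    # Only a compact summary is forwarded for long-term cloud analytics.
    return {
        "count": len(readings),
        "mean": mean(readings),
        "max": max(readings),
        "alerts": len(alerts),
    }

print(fog_node_process([72.1, 74.3, 85.0, 73.8]))
```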

See More: What Is Edge Computing? Components, Examples, and Best Practices

Basic Components of Fog Computing

There are multiple ways of implementing a fog computing system. The common components across these architectures are explained below.


1. Physical & virtual nodes (end devices)

End devices serve as the points of contact with the real world, whether they are application servers, edge routers, sensors, or user devices such as mobile phones and smartwatches. These devices are data generators and can span a large spectrum of technology, which means they may have varying storage and processing capacities and different underlying software and hardware.

2. Fog nodes

Fog nodes are independent devices that pick up the generated information. Fog nodes fall under three categories: fog devices, fog servers, and gateways. Fog devices store the necessary data, while fog servers also process this data to decide on a course of action. Fog devices are usually linked to fog servers. Fog gateways redirect information between the various fog devices and servers. This layer is important because it governs the speed of processing and the flow of information. Setting up fog nodes requires knowledge of varied hardware configurations, the devices they directly control, and network connectivity.

3. Monitoring services

Monitoring services usually include application programming interfaces (APIs) that keep track of the system’s performance and resource availability. Monitoring systems ensure that all end devices and fog nodes are up and communication isn’t stalled. Sometimes, waiting for a node to free up may be more expensive than hitting the cloud server. The monitor takes care of such scenarios. Monitors can be used to audit the current system and predict future resource requirements based on usage.
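As a toy example of the trade-off described above, the sketch below compares the expected wait at a busy fog node against a cloud round trip and picks the cheaper option. The latency figures are hypothetical.

```python
def choose_target(fog_queue_length: int, per_item_ms: float, cloud_round_trip_ms: float) -> str:
    """Pick the cheaper option: wait for the local fog node or call the cloud."""
    expected_fog_wait = fog_queue_length * per_item_ms
    return "fog" if expected_fog_wait <= cloud_round_trip_ms else "cloud"

# A busy fog node (20 queued jobs at 15 ms each) loses to a 120 ms cloud call.
print(choose_target(fog_queue_length=20, per_item_ms=15, cloud_round_trip_ms=120))  # cloud
print(choose_target(fog_queue_length=3, per_item_ms=15, cloud_round_trip_ms=120))   # fog
```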

4. Data processors

Data processors are programs that run on fog nodes. They filter, trim, and sometimes even reconstruct faulty data that flows from end devices. Data processors are in charge of deciding what to do with the data — whether it should be stored locally on a fog server or sent for long-term storage in the cloud. Information from varied sources is homogenized for easy transportation and communication by these processors. 

This is done by exposing a uniform and programmable interface to the other components in the system. Some processors are intelligent enough to fill in missing information based on historical data if one or more sensors fail, which helps prevent application failures.
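Here is a minimal sketch of a data processor that back-fills a failed sensor reading from recent history and routes each cleaned value either to local fog storage or to the cloud. The routing rule, baseline value, and history size are invented for illustration.

```python
from collections import deque

history = deque(maxlen=10)   # recent good readings from this sensor

def process(reading):
    """Clean one reading and decide where it should be stored."""
    if reading is None:                    # sensor failed: back-fill from history
        if not history:
            raise ValueError("no historical data to fall back on")
        reading = sum(history) / len(history)
    else:
        history.append(reading)
    # Illustrative routing rule: unusual values go to the cloud for long-term
    # analysis, routine values stay on the local fog server.
    destination = "cloud" if abs(reading - 21.0) > 5.0 else "fog"
    return reading, destination

print(process(22.3))   # (22.3, 'fog')
print(process(None))   # back-filled from the recent history
```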

5. Resource manager

Fog computing consists of independent nodes that must work in a synchronized manner. The resource manager allocates and deallocates resources to various nodes and schedules data transfer between nodes and the cloud. It also takes care of data backup, ensuring zero data loss. 

Since fog components take up some of the SLA commitments of the cloud, high availability is a must. The resource manager works with the monitor to determine when and where demand is high, ensuring that neither data nor fog servers are needlessly duplicated.
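The sketch below shows one piece of this role: a periodic backup sweep that copies each fog node's short-term store to durable cloud storage so that no data is lost if a node fails. The node names, record format, and backup function are placeholders.

```python
import json

def backup_node_data(node_id: str, records: list) -> None:
    """Stand-in for copying a fog node's short-term store to durable cloud storage."""
    payload = json.dumps({"node": node_id, "records": records})
    print(f"backing up {len(records)} records from {node_id} ({len(payload)} bytes)")

local_stores = {
    "fog-1": [{"t": 21.4}, {"t": 21.9}],
    "fog-2": [{"t": 22.1}],
}

# The resource manager sweeps every node on a schedule so that no
# short-lived fog data is lost if a node goes down.
for node_id, records in local_stores.items():
    backup_node_data(node_id, records)
    records.clear()          # the local copy can be freed once it is safely backed up
```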

6. Security tools

Since fog components directly interact with raw data sources, security must be built into the system even at the ground level. Encryption is a must since all communication tends to happen over wireless networks. End users directly ask the fog nodes for data in some cases. As such, user and access management is part of the security efforts in fog computing.

7. Applications

Applications provide actual services to end-users. They use the data provided by the fog computing system to provide quality service while ensuring cost-effectiveness. It is important to note that these components must be governed by an abstraction layer that exposes a common interface and a common set of protocols for communication. This is usually achieved using web services such as APIs.
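A minimal sketch of such an abstraction layer in Python is shown below: every device type implements one common read() interface, so applications never deal with device-specific formats. The device classes and fields are hypothetical.

```python
from abc import ABC, abstractmethod

class FogDataSource(ABC):
    """Common interface every heterogeneous device exposes to applications."""

    @abstractmethod
    def read(self) -> dict:
        """Return the latest reading in a normalized, device-agnostic format."""

class TemperatureSensor(FogDataSource):
    def read(self) -> dict:
        return {"type": "temperature", "value": 21.7, "unit": "celsius"}

class DoorCamera(FogDataSource):
    def read(self) -> dict:
        return {"type": "motion", "value": True, "unit": None}

# The application consumes every device through the same interface.
for device in (TemperatureSensor(), DoorCamera()):
    print(device.read())
```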

See More: What Is IoT Device Management? Definition, Key Features, and Software

Examples and Use Cases of Fog Computing

While cloud computing has become all-pervasive, fog computing is only now emerging to address the latency issues that plague IoT devices. Here are some of its most common examples and use cases.

1. Smart homes

One of the most common fog computing use cases is a smart home. A smart home consists of a technology-controlled ventilation and heating system such as the Nest Learning Thermostat, smart lighting, programmable shades and sprinklers, smart intercom systems to communicate with people indoors as well as those at the door, and an intelligent alarm system. Fog computing can be used to create a personalized alarm system. It can also be used to automate certain events, such as turning on water sprinklers based on time and temperature.
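As an illustration, here is the kind of small rule a fog node in a smart home might evaluate locally, with no cloud round trip. The watering schedule and thresholds are made up for the example.

```python
from datetime import time

def should_water(now: time, soil_temp_c: float, rain_expected: bool) -> bool:
    """Simple local rule a fog node could evaluate without any cloud round trip."""
    in_schedule = time(5, 30) <= now <= time(7, 0)   # water in the early morning
    warm_enough = soil_temp_c > 10.0                 # skip watering near freezing
    return in_schedule and warm_enough and not rain_expected

print(should_water(time(6, 0), soil_temp_c=18.0, rain_expected=False))  # True
print(should_water(time(6, 0), soil_temp_c=18.0, rain_expected=True))   # False
```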

2. Smart cities

Smart cities aspire to be automated on every front, from garbage collection to traffic management. Fog computing is particularly pertinent when it comes to traffic regulation. Sensors set up at traffic signals and road barriers detect pedestrians, cyclists, and vehicles, while speed sensors measure how fast vehicles are traveling and help estimate how likely a collision is. These sensors use wireless and cellular technology to collate this data. Traffic signals automatically turn red or stay green for a longer time based on the information processed from these sensors.

3. Video surveillance

The most prevalent example of fog computing is perhaps video surveillance, given that continuous streams of videos are large and cumbersome to transfer across networks. The nature of the involved data results in latency problems and network challenges. Costs also tend to be high for storing media content. Video surveillance is used in malls and other large public areas and has also been implemented in the streets of numerous communities. Fog nodes can detect anomalies in crowd patterns and automatically alert authorities if they notice violence in the footage.

4. Healthcare

The healthcare industry is one of the most governed industries, with regulations such as HIPAA being mandatory for hospitals and healthcare providers. This sector is always looking to innovate and address emergencies in real-time, such as a drop in vitals. One way of doing it is using data from wearables, blood glucose monitors, and other health apps to look for signs of bodily distress. This data should not face any latency issues as even a few seconds of delay can make a huge difference in a critical situation, such as a stroke. 

5. Others

Other industries that use fog computing include retail, oil & gas, government & military, and hospitality. Personal assistants such as Siri and Alexa are available across devices and are compatible with most hardware, including smartwatches. This flexibility and ubiquity mean that fog computing is set to become a crucial part of various industry verticals. Any enterprise that offers real-time solutions will need to incorporate fog computing into its existing cloud infrastructure.

See More: What Is Multicloud Infrastructure? Definition, Components, and Management Best Practices

Top 10 Fog Computing Best Practices to Follow in 2022

Implementing a fog engine comes with its own complications. Enterprises tend to go for a centralized approach with technical infrastructure as administration becomes easy. Setting up a decentralized set of heterogeneous fog devices throws up new challenges in terms of maintenance and compatibility. Here are the top 10 fog computing best practices to follow in 2022.


1. Ensure provision for flexibility

The beauty of fog computing lies in tying together varied hardware and software. When a flexible interfacing program isn’t available for this linking, things can get messy quickly. Web-based services and APIs must be created while keeping new physical and virtual sensors in mind. Besides integration with other fog nodes, the fog engine must also seamlessly integrate with the existing cloud solution.

2. Implement a fog console

Administrators must track all deployed fog nodes within the system and decommission them when required. A central view of this decentralized infrastructure can keep things in order and eliminate vulnerabilities that arise out of zombie fog devices. Besides a management console, a robust reporting and logging engine makes compliance audits easier to handle since fog components are bound by the same mandates as cloud-based services.
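The sketch below shows the kind of bookkeeping such a console might do to spot 'zombie' nodes that have stopped reporting but were never decommissioned. The heartbeat interval and data structures are assumptions, not a specific product's design.

```python
from datetime import datetime, timedelta, timezone

registry = {}                               # node id -> last heartbeat seen

def heartbeat(node_id: str) -> None:
    registry[node_id] = datetime.now(timezone.utc)

def zombie_nodes(max_silence: timedelta = timedelta(minutes=10)) -> list:
    """Nodes that stopped reporting but were never decommissioned."""
    cutoff = datetime.now(timezone.utc) - max_silence
    return [node for node, seen in registry.items() if seen < cutoff]

def decommission(node_id: str) -> None:
    registry.pop(node_id, None)             # remove it from the console entirely

heartbeat("fog-lobby-01")
print(zombie_nodes())                       # [] while the node keeps reporting
```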

3. Apply access control at the fog node layer

In a traditional cloud-based setup, users directly access services from the cloud. This is why all cloud vendors come with their own access management system, which can be used with third-party identity and access management (IAM) solutions. With fog computing, the fog layer acts as a middleman between the user and the cloud. This means the fog engine must know who is requesting the service, and the same authorization processes and policies apply here as well.
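As a minimal sketch of how a fog node could verify a request locally, the snippet below uses a shared-secret token scheme. A real deployment would integrate with the organization's IAM solution rather than this simplified HMAC example; the secret and user IDs are placeholders.

```python
import hashlib
import hmac

SHARED_SECRET = b"replace-with-a-real-secret"   # provisioned by the IAM system

def sign(user_id: str) -> str:
    """Issue a token the fog node can verify without calling the cloud."""
    return hmac.new(SHARED_SECRET, user_id.encode(), hashlib.sha256).hexdigest()

def authorize(user_id: str, token: str) -> bool:
    """Verify the request at the fog layer before serving any data."""
    expected = sign(user_id)
    return hmac.compare_digest(expected, token)

token = sign("alice")
print(authorize("alice", token))    # True
print(authorize("mallory", token))  # False
```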

4. Set up appropriate security tools & processes

One of the biggest challenges in fog computing is security, which isn’t as straightforward with a decentralized, local setup. User authentication is just the first step toward fog security. All data transmission must be encrypted, especially since the transfer mode is primarily wireless. Application signature validation is another crucial step with application service requests. Even when stored temporarily, sensitive user data is bound by compliance regulations. User behavior profiling is another feature that adds an extra layer of security.
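For the encryption requirement, here is a minimal sketch of encrypting a sensor payload with the third-party cryptography package's Fernet recipe. Key distribution and rotation are out of scope here, and the payload is illustrative.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, distributed via key management
cipher = Fernet(key)

reading = b'{"sensor": "temp-07", "value": 21.9}'
token = cipher.encrypt(reading)      # what actually travels over the wireless link
print(cipher.decrypt(token))         # only holders of the key can recover the data
```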

5. Ensure small hardware & software footprint

Finding the right kind of hardware and software to go with each sensor is essential. While it may be tempting to over-engineer and add sophisticated devices at the fog level, the aim is to ensure minimum hardware and software footprint. Anything more will result in an expensive middle-level computation that can become a security liability. The role of each sensor and the corresponding fog node must be carefully considered. The lifecycle of each fog component can be automated to be handled from the central console.

6. Incorporate threat detection & mitigation at the fog level

Catching threats at the fog level, before they ever reach the main cloud infrastructure, is one of the most valuable security measures an organization can adopt. The security component of the fog engine must also be tuned to spot anomalies in application and user behavior. With so many disparate components involved, it is easy to overlook hardware- or software-specific vulnerabilities. All security updates and patches must be applied with a set process and schedule in place.

7. Use relevant load balancing techniques

One of the biggest advantages of fog computing is a reduction in latency and freeing up of network traffic. This cannot be achieved if the fog nodes themselves aren’t monitored and load-balanced properly. Overloading or underloading of fog nodes needs to be avoided here. Quality of Service (QoS) parameters such as resource utilization, throughput, performance, response time, cost, and energy consumption can all be enhanced with load-balanced fog layers. 
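The toy selection function below illustrates one way a load balancer might pick the least-loaded fog node by weighting utilization and response time. The per-node metrics and the scoring weights are hypothetical.

```python
# Hypothetical per-node metrics the monitoring service would supply in practice.
node_metrics = {
    "fog-1": {"utilization": 0.62, "response_ms": 18},
    "fog-2": {"utilization": 0.31, "response_ms": 25},
    "fog-3": {"utilization": 0.90, "response_ms": 40},
}

def pick_node(metrics: dict) -> str:
    """Least-loaded selection: weight utilization and response time equally."""
    def score(node: str) -> float:
        m = metrics[node]
        return m["utilization"] + m["response_ms"] / 100.0
    return min(metrics, key=score)

print(pick_node(node_metrics))   # fog-2
```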

8. Choose storage options based on requirements

The storage options at each sensor level depend on the type of sensors supported by the organization. Big media libraries work best with rotating disks, while local flash chips are ideal for security keys, log files, and tables. Anything that requires large in-memory storage needs a data server, though this is best kept out of the fog architecture altogether. When choosing hardware, it is important to consider the cost of storage per GB.

9. Consider energy efficiency

The additional hardware can quickly lead to overlooked increases in energy consumption. Appropriate measures such as ambient cooling, low-power silicon, and selective power-down modes need to be implemented to maintain energy efficiency.

10. Design for uninterrupted fog services

Fog nodes need to run independently of the central system and of each other. The system must be designed for high availability so that the outage of one node doesn't bring down the entire service. Customized data backup schemes, based on the type and role of each fog node, must be implemented and revisited regularly.
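A minimal sketch of this failover pattern is shown below: the client tries each fog node in turn and falls back to the cloud as a last resort, so one node's outage never interrupts the service. The failure simulation and node names are placeholders.

```python
import random

def query_node(node_id: str) -> str:
    """Stand-in for a real request; fails randomly to simulate an outage."""
    if random.random() < 0.3:
        raise ConnectionError(f"{node_id} is unreachable")
    return f"result from {node_id}"

def resilient_query(nodes: list) -> str:
    """Try each fog node in turn; fall back to the cloud as a last resort."""
    for node in nodes:
        try:
            return query_node(node)
        except ConnectionError:
            continue                      # one node's outage must not stop the service
    return "result from cloud (fallback)"

print(resilient_query(["fog-1", "fog-2", "fog-3"]))
```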

See More: What Is IT Infrastructure? Definition, Building Blocks, and Management Best Practices

Takeaway

Fog computing enhances business agility while improving QoS. The faster information is processed, the better the experience for users. It also means employees do not need to operate on a choked-up network, and companies need not pay exorbitant amounts for extended cloud storage. Cellular networks have become stronger and more reliable, even as technology grows in leaps and bounds. Considering the many positives and accelerants of fog computing, companies should consider this system as naturally as they consider cloud computing when building their infrastructure.

Did this article help you understand fog computing in detail? Tell us on LinkedIn, Twitter, or Facebook. We'd love to hear from you!
