Edge computing and fog computing are on the rise in the tech industry, but with the embrace of these new technologies comes some misunderstanding. Alan Conboy, CTO of Scale Computing, brings clarity by analyzing the differences between fog computing and edge computing.
With each passing day, more companies, organizations, and government agencies are adopting edge computing as a way to process and analyze data closer to its point of creation. The fairly nascent technology enables hyper-converged appliances combining storage and compute to be placed much closer to the action, with far less latency than a round trip to and from the cloud.
In the process, the technology has spawned a global army of Internet of Things (IoT) devices to collect data. It’s predicted that by 2020, the installed base of IoT devices will grow to almost 31 billion worldwide, producing a market worth nearly $9 trillion.
By putting compute resources, also known as “micro private clouds,” closer to IoT sensors, edge computing accelerates decisions by eliminating the time it takes to transfer data to and from traditional data centers for analysis and recommendations.
That means we can address cyber threats directly at the point of attack. Military or emergency response teams can set up micro data centers in the field to run operations and communications. Manufacturers can deploy data centers near or on the factory floor to predict inventory needs and equipment failures. Formula One teams can set up data centers outside race tracks to collect data from their race cars and know when to make changes to their engines.
The computational analysis in these scenarios is super fast. But is it “blink of the eye” fast?
Edge computing relies on 4G LTE and 5G connections to transport information between sensors and micro data centers, which results in some latency. And that’s where fog computing comes into play.
Fog computing places the data center as close as possible to the IoT sensors where data is collected. It produces a fabric that connects from the edges, where information is created, to where it will be stored, whether that’s in the cloud or a customer’s data center. It essentially combines a complete network of sensors, routers, compute, and storage in a smaller space next to where data is generated.
Imagine, for example, a complete data center in the trunk of a self-driving car, or in a high-speed train that can make quick changes in speed based on track or weather conditions. With fog computing, offshore oil rigs can make immediate pressure adjustments based on the condition of gaskets and valves.
Ultimately, decisions are made faster because we’re placing computational brains closer to where decisions have to be made.
For the sake of comparison, let’s dive a little deeper into the self-driving car scenario. With edge computing, the driverless car can detect and respond to another vehicle stopped ahead only after data is sent to, and instructions are retrieved from, the nearest micro data center.
Although the time this takes is extremely short, a delay of even a few milliseconds can result in a wreck. Fog computing removes that latency with an onboard data center that ensures those instructions are received and implemented almost instantaneously.
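To put that round trip in perspective, here is a back-of-envelope sketch of how far a car travels while waiting on an off-board answer. The speeds and latency figures are illustrative assumptions, not measurements from any real deployment:

```python
# Back-of-envelope: distance a car travels while waiting for one
# request/response round trip. All numbers below are illustrative
# assumptions, not measured figures.

def blind_distance_m(speed_kmh: float, round_trip_ms: float) -> float:
    """Metres travelled during one round trip to wherever the decision is made."""
    speed_m_s = speed_kmh * 1000 / 3600   # convert km/h to m/s
    return speed_m_s * (round_trip_ms / 1000)

# Assumed edge scenario: ~50 ms round trip over 4G LTE at 108 km/h.
print(blind_distance_m(108, 50))   # -> 1.5 metres travelled "blind"

# Assumed fog scenario: ~1 ms lookup against an onboard data center.
print(blind_distance_m(108, 1))    # -> 0.03 metres
```

Even under these generous assumptions, the off-board round trip costs the car more than a metre of travel before any instruction arrives, which is the gap fog computing is meant to close.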
Fog computing also prioritizes processing jobs. Most tactical analysis, like checking the safety status of a piece of equipment, can be handled by fog computing, while more strategic work, such as machine learning, can take place in the traditional data center.
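The tactical/strategic split above can be sketched as a simple routing rule. This is a hypothetical illustration, not a real product’s API; the job fields, the `route` function, and the gigabyte-scale capacity cap are all assumptions for the sake of the example:

```python
# Hypothetical sketch of fog-vs-cloud job routing: time-critical
# ("tactical") work runs on the local fog node, while heavier
# "strategic" work is deferred to a traditional data center.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    tactical: bool    # needs an answer now (e.g. an equipment safety check)
    data_gb: float    # size of the input data

FOG_CAPACITY_GB = 8.0  # assumed local storage/compute budget (gigabytes, not petabytes)

def route(job: Job) -> str:
    """Decide where a job should run."""
    # Tactical jobs stay local only if they fit within the fog node's limits.
    if job.tactical and job.data_gb <= FOG_CAPACITY_GB:
        return "fog-node"
    # Everything else (strategic work, oversized inputs) goes upstream.
    return "cloud-data-center"

print(route(Job("valve-pressure-check", tactical=True, data_gb=0.2)))  # fog-node
print(route(Job("train-ml-model", tactical=False, data_gb=500.0)))     # cloud-data-center
```

Note that the capacity check also captures the limitation discussed next: a fog node handles gigabytes, so even a time-critical job that exceeds its budget has to be pushed upstream.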
But while fog computing solutions might be super fast, their ability to store and process intensive workloads is limited to gigabytes of data and a smaller set of instructions. The devices are also less secure than traditional data centers, which exposes individual pieces of equipment to hackers.
Fog computing adoption is still extremely low, as is the number of solutions out on the market. Cisco, for example, markets a fog computing architecture that consists of connected cloud servers, network routers, access nodes, and IoT endpoints.
Organizations are being formed to help promote the adoption of fog computing. The OpenFog Consortium, for example, is working with members from multiple verticals to create an open reference architecture for use case models and testing to create an OpenFog ecosystem of new products and applications.
If the technology can deliver on expectations, we’ll likely see adoption of fog computing take hold just as fast, if not faster, than edge computing. So, just when you thought you knew how to explain “the cloud” to your family and friends, here comes the fog!