Cloud vs. the Edge? Key Strategies to Optimize Critical Distributed Applications


Where to apply assets – cloud versus the edge – is not an “either-or” choice. It is about blending resources for the best performance along a spectrum of choices. This deeper understanding of potential application behavior empowers designers to allocate tasks most appropriately, writes David Sprinzen, VP of marketing at Vantiq.

The promise of edge computing is here today. Across multiple industries and applications – including agriculture, mobility and autonomous vehicles, oil and gas, smart energy grids, emergency response, industrial monitoring and maintenance, healthcare, weather and environmental monitoring, and cloud-based gaming and entertainment – edge computing is delivering. Collecting and processing data at the edge results in significant performance gains and amazing new customer experiences.

Edge computing places storage and processing power closer to the actual source of the data, such that faster response times – as well as the ability to act on real-time events – can be achieved. With edge computing, IT can develop new capabilities and applications to continuously monitor people, places and devices across diverse physical and digital environments.


What Is the Dynamic Edge?

Consulting firm Accenture explains that edge computing is “an emerging computing paradigm which refers to a range of networks and devices at or near the user” and “is about processing data closer to where it’s being generated, [and] enabling processing at greater speeds and volumes, leading to greater action-led results in real time.”

The edge decentralizes computing power to build dynamic new environments and enhanced user experiences, like automated retail shopping and smart building management. To support the delivery of next-generation business applications, edge computing integrates networked sensors, IoT devices and digital twins, as well as legacy systems. Today’s dynamic edge builds on the key concepts of multi-access edge computing (MEC), which focuses on edge processing that reduces network congestion and improves application performance.

Edge applications are part of a broader range of distributed applications that operate across multiple network hosts, from edge devices to local systems and public and private clouds. The most effective edge-native applications optimize the placement of critical workloads, and system architects must determine how workloads should be allocated for optimal performance.

However, making design decisions about where to run different elements of an application is challenging. As we drive towards gaining value from sensors and the IoT by migrating workloads from the cloud to the edge, it is not possible to migrate every element of a given system or application. As a result, IT professionals must decide how to architect systems that operate partly on the edge and partly in the cloud. This “edge versus cloud” scenario demands new strategies to navigate our new physical-digital world.

Designers must account for the global cloud, as well as regional clouds – which can become MECs and edge nodes – and the distant edge and the device edge. Each location on the network has its specific advantages and disadvantages. For example, the global cloud is mainly utilized for storage, as well as historical data analysis and global system optimization. 

Key Focus Areas for the Dynamic Edge

Because the dynamic edge is all about achieving the performance needed to deliver innovative applications and new user experiences, IT must optimize the way that devices, compute power, and storage interoperate. 

Key focus areas should include:

1. Latency

In most cloud scenarios, data and processing are centralized in the data center. This aids in terms of access and collaboration, but centralized servers are typically remote from data sources. In a dynamic edge application, longer data transmission times due to network latency can lead to a poor experience.

When it comes to latency, running edge applications as close as possible to edge device(s) is the preferred strategy. While some less latency-sensitive functionality, like after-the-fact data analysis, is fine operating in the cloud, the key takeaway is that when latency should be low, processing should be pushed to the edge.
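The latency-driven placement rule above can be sketched in code. This is a minimal illustration, not any vendor's API: the round-trip-time constants and the `choose_host` function are hypothetical assumptions, standing in for measurements a real deployment would take.

```python
# Illustrative placement rule: pick the closest host that can satisfy a
# workload's latency budget. All numbers below are assumed, not measured.

EDGE_RTT_MS = 5      # assumed round trip to a nearby edge node
CLOUD_RTT_MS = 80    # assumed round trip to a centralized cloud region

def choose_host(latency_budget_ms: float) -> str:
    """Return 'edge' when only the edge can respond within budget."""
    if latency_budget_ms < CLOUD_RTT_MS:
        return "edge"   # the cloud's round trip alone would blow the budget
    return "cloud"      # budget is loose enough for centralized processing

# A real-time alert that must fire within 20 ms goes to the edge;
# after-the-fact analysis tolerating seconds can stay in the cloud.
alert_host = choose_host(20)        # "edge"
analytics_host = choose_host(5000)  # "cloud"
```

The same rule generalizes to more tiers (device edge, far edge, regional MEC, global cloud) by comparing the budget against each tier's expected round trip in order of proximity.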

2. Bandwidth

Moving data requires both time (latency) and expense, making it especially important to transport as little data as possible. Ensuring that your application isn’t shipping too much data to the cloud can be accomplished through data aggregation and processing at the edge prior to sending it to remote data center-based servers. For example, sending only results to the cloud in the form of already-processed data uses significantly less bandwidth when compared to shipping raw data across networks. It is essential to consider how to aggregate or filter data in order to transform large datasets into smaller summaries or data of interest.
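Edge-side aggregation of this kind can be sketched in a few lines. This is an illustrative example only; the summary fields and the sample readings are assumptions, not part of any specific platform.

```python
# Hedged sketch of edge-side aggregation: collapse a window of raw sensor
# readings into a small summary before shipping it to the cloud.
import statistics

def summarize(readings: list[float]) -> dict:
    """Reduce raw samples to the few numbers the cloud actually needs."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "avg": statistics.fmean(readings),
    }

# e.g. one window of temperature samples collected at the edge
raw = [21.0, 21.4, 22.1, 35.9, 21.2]
payload = summarize(raw)  # four numbers cross the network instead of the full window
```

In a real deployment the window might hold thousands of samples per interval, so the bandwidth saving scales with the sampling rate; anomalous readings (like the 35.9 spike here) can additionally be forwarded raw as "data of interest."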

3. Compute and storage 

While cloud computing is typically considered inexpensive, there are significant costs for moving data to and from the cloud, along with latency and bandwidth factors. Therefore, system architects must place compute and storage in locations that balance cost and performance.

While edge resources can meet the real-time compute needs of dynamic applications, they’re ultimately limited by the processing power of edge devices, meaning vertical scalability is limited. Distributed deployment strategies can help by using edge processing for time-sensitive application elements and filtering out expendable, non-critical data.

The edge is more expensive in terms of storage, and there are fewer available resources. Avoid treating the edge like the cloud and instead ensure that the heavy lifting in terms of computing and storage is still done in the cloud. Simply put, place time-sensitive application elements on the edge and keep heavy loads in the data center or cloud.
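The cost side of this trade-off can be made concrete with a toy model. All prices below are hypothetical placeholders, not quotes from any provider; the point is only the shape of the comparison: edge storage is pricier per gigabyte, while cloud storage is cheap per gigabyte but charges for every byte moved.

```python
# Toy cost model for where a workload's data should live.
# Every dollar figure is an assumption for illustration only.

CLOUD_STORAGE_PER_GB = 0.02  # assumed $/GB-month, bulk cloud storage
EDGE_STORAGE_PER_GB = 0.20   # assumed $/GB-month, constrained edge disk
TRANSFER_PER_GB = 0.09       # assumed $/GB to move data into/out of the cloud

def monthly_cost(gb: float, at_edge: bool) -> float:
    """Storage cost per month; cloud placement also pays the transfer tax."""
    if at_edge:
        return gb * EDGE_STORAGE_PER_GB
    return gb * (CLOUD_STORAGE_PER_GB + TRANSFER_PER_GB)

hot_working_set = monthly_cost(1.0, at_edge=True)       # small, latency-critical
cold_archive = monthly_cost(1000.0, at_edge=False)      # large, analyzed later
```

Under these assumed prices, large historical archives clearly belong in the cloud, while the small, hot working set stays at the edge for latency reasons (the previous sections), with its higher per-gigabyte storage cost as the price of responsiveness.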

4. Emerging Design Requirements

A new and more flexible design approach is required for dynamic edge applications to achieve both high performance and cost/resource efficiency. Where to apply assets is not an “either-or” choice; it is more about blending resources for the best performance along a spectrum of choices. This approach must allow for modeling that helps to predict the behavior of edge and data center computing and storage assets, as well as for predicting network issues – including bandwidth requirements and latency.

This deeper understanding of potential application behavior empowers designers to allocate tasks to the most appropriate hosts. It also helps to drive towards a more efficient “develop once, deploy anywhere” model. While predicting the behavior of edge applications can be difficult until real-world workloads are implemented, upfront planning and simulation can provide meaningful guidance for developers. Ultimately, this approach helps deliver new and improved user experiences more efficiently and easily than ever before.

Top Five Tips When Evaluating Cloud vs. the Edge

    • Place compute and storage resources where cost and performance factors are most balanced – think of your strategy along a spectrum versus making “either-or” choices.
    • When latency must be low, move processing to the edge.
    • To limit bandwidth usage, minimize the amount of data moved to the cloud.
    • Think about sending summary data or only data of interest to the cloud for analysis.
    • Avoid treating the edge like the cloud: ensure that heavy compute and storage tasks reside in the data center.

The cloud or the edge? What’s your choice? Share with us on Facebook, Twitter, and LinkedIn.

Image Source: Shutterstock