6 edge computing trends to watch in 2022
Blog: The Enterprise Project - Enterprise Technology
December 6, 2021 – 3:00am
While many aspects of edge computing aren’t new, the overall picture continues to evolve quickly. For example, “edge computing” encompasses the distributed retail store branch systems that have been around for decades. The term has also swallowed all manner of local factory floor and telecommunications provider computing systems, albeit in a more connected and less proprietary fashion than was the historical norm.
However, even if we see echoes of older architectures in certain edge computing deployments, we also see developing edge trends that are genuinely new, or at least quite different from what existed previously. And they're helping IT and business leaders solve problems in industries ranging from telco to automotive as both sensor data and machine learning (ML) data proliferate.
[ How can automation free up more staff time for innovation? Get the free eBook: Managing IT with Automation. ]
Edge computing trends that should be on your radar
Here, edge experts explore six trends that IT and business leaders should focus on in 2022:
1. Edge workloads get fatter
One big change we're seeing is that there's more computing and more storage out on the edge. Decentralized systems have often existed more to reduce reliance on network links than to perform tasks that couldn't practically be done in a central location, assuming reasonably reliable communications. But that's changing.
IoT has, almost by definition, always involved collecting data. But what was once a trickle has turned into a flood as the data required for machine learning (ML) applications flows in from a multitude of sensors. And even though models are typically trained in a centralized datacenter, the ongoing application of those models is usually pushed out to the edge of the network. This limits network bandwidth requirements and allows for rapid local action, such as shutting down a machine in response to anomalous sensor readings. The goal is to deliver insights and take action at the moment they're needed.
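The train-centrally, infer-locally pattern can be sketched in a few lines. This is a minimal illustration, not any particular product's API: `classify` stands in for a model deployed to the device, and the threshold and function names are hypothetical.

```python
# Hypothetical sketch: a centrally trained model is applied locally so that
# the response to an anomalous reading doesn't wait on a network round trip.

def classify(reading, threshold=75.0):
    """Stand-in for a locally deployed ML model: flag anomalous readings."""
    return "anomaly" if reading > threshold else "normal"

def on_sensor_reading(reading, shutdown):
    """Act immediately at the edge; only summaries need go back upstream."""
    if classify(reading) == "anomaly":
        shutdown()  # local action: stop the machine right away
        return "machine stopped"
    return "ok"

stopped = []
result = on_sensor_reading(98.6, lambda: stopped.append(True))
```

The point of the sketch is the placement of the decision: the model runs where the sensor is, and only the outcome (not the raw sensor stream) needs to traverse the network.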
[ Need to talk edge with colleagues, customers, or partners? Get a shareable primer: How to explain edge computing in plain English. ]
2. RISC-V gains ground
Of course, workloads that are both data- and compute-intensive need hardware to run on. The specifics vary with the application and the tradeoffs required among performance, power, cost, and so forth. Traditionally the choice has come down to something custom, ARM, or x86. None is fully open, although ARM and x86 have developed large ecosystems of supporting hardware and software over time, largely driven by the lead processor component designers.
But RISC-V is a new and intriguing open instruction set architecture (ISA).
Why intriguing? Here’s how Red Hat Global Emerging Technology Evangelist Yan Fisher puts it: “The unique aspect of RISC-V is that its design process and the specification are truly open. The design reflects the community’s decisions based on collective experience and research.”
This open approach, and an active ecosystem to go along with it, is already helping to drive RISC-V design wins across a broad range of industries. Calista Redmond, CEO of RISC-V International, observes that: “With the shift to edge computing, we are seeing a massive investment in RISC-V across the ecosystem, from multinational companies like Alibaba, Andes Technology, and NXP to startups like SiFive, Esperanto Technologies, and GreenWaves Technologies designing innovative edge-AI RISC-V solutions.”
3. Virtual Radio Access Networks (vRAN) become an increasingly important edge use case
A radio access network is responsible for enabling and connecting devices such as smartphones or internet of things (IoT) devices to a mobile network. As part of 5G deployments, carriers are shifting to a more flexible vRAN approach whereby the high-level logical RAN components are disaggregated by decoupling hardware and software, as well as using cloud technology for automated deployment and scaling and workload placement.
Hanen Garcia, Red Hat Telco Solutions Manager, and Ishu Verma, Red Hat Emerging Technology Evangelist, note that “One study indicates deployment of virtual RAN (vRAN)/Open RAN (oRAN) solutions realize network TCO savings of up to 44% compared to traditional distributed/centralized RAN configurations.” They add that: “Through this modernization, communications service providers (CSPs) can simplify network operations and improve flexibility, availability, and efficiency—all while serving an increasing number of use cases. Cloud-native and container-based RAN solutions provide lower costs, improved ease of upgrades and modifications, ability to scale horizontally, and with less vendor lock-in than proprietary or VM-based solutions.”
4. Scale drives operational approaches
Many aspects of an edge computing architecture can be different from one that's implemented solely within the walls of a datacenter. Devices and computers may have weak physical security and no IT staff on-site. Network connectivity may be unreliable. Good bandwidth and low latencies aren't a given. But many of the most pressing challenges relate to scale; there may be thousands (or more) of network endpoints.
Kris Murphy, Senior Principal Software Engineer at Red Hat, identifies four primary steps you must take in order to deal with scale: “Standardize ruthlessly, minimize operational ‘surface area,’ pull whenever possible over push, and automate the small things.”
For example, she recommends doing transactional, which is to say atomic, updates so that a system can’t end up only partially updated and therefore in an ill-defined state. When updating, she also argues that it’s a good practice for endpoints to pull updates because “egress connectivity is more likely available.” One should also take care to limit peak loads by not doing all updates at the same time.
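Two of those practices, atomic updates and staggered pulls, are easy to sketch. This is a hedged illustration rather than any specific tool: the write-temp-then-rename idiom makes an update all-or-nothing, and random jitter on a pull interval keeps thousands of endpoints from fetching at the same instant. Function names and intervals here are illustrative.

```python
import os
import random
import tempfile

def atomic_write(path, data: bytes):
    """Write to a temp file, then rename over the target: readers see either
    the old file or the new one, never a partially written state."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # ensure bytes hit disk before the rename
        os.replace(tmp, path)      # atomic on POSIX filesystems
    except BaseException:
        os.unlink(tmp)             # clean up the temp file on failure
        raise

def next_check_in(base_seconds=3600, jitter=0.25):
    """Pull-based polling with random jitter, so endpoints spread their
    update checks out instead of creating a synchronized peak load."""
    return base_seconds * (1 + random.uniform(-jitter, jitter))
```

With a one-hour base interval and 25% jitter, each endpoint checks in somewhere between 45 and 75 minutes, smoothing the load on the update server.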
[ Want to learn more about implementing edge computing? Read the blog: How to implement edge infrastructure in a maintainable and scalable way. ]
5. Edge computing needs attestation
With resources at the edge tight, capabilities that require little to no local resources are the pragmatic options to consider. Furthermore, once again, any approach needs to be highly scalable, or its uses and benefits become extremely limited. One option that stands out is the Keylime project. "Technologies like Keylime, which can verify that computing devices boot up and remain in a trusted state of operation at scale, should be considered for broad deployment, especially for resource-constrained environments," says Ben Fischer, Red Hat Emerging Technology Evangelist.
Keylime provides remote boot and runtime attestation using Integrity Measurement Architecture (IMA) and leverages Trusted Platform Modules (TPMs), which are common on most laptop, desktop, and server motherboards. If no hardware TPM is available, a virtual TPM (vTPM) can be loaded to provide the requisite functionality. Boot and runtime attestation is a means to verify that the edge device boots to a known trusted state and maintains that state while running. In other words, if something unexpected happens, such as a rogue process, the expected state changes; that change shows up in the measurements, and the device is taken offline because it has entered an untrusted state. The device can then be investigated, remediated, and put back into service in a trusted state.
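The core verification logic behind attestation can be sketched simply. Note this is a conceptual illustration only, not Keylime's actual API or protocol: a verifier holds an allowlist of known-good measurements (hashes of boot components) and a device stays trusted only while everything it reports matches. All names and values below are hypothetical.

```python
import hashlib

# Hypothetical allowlist: component name -> expected measurement (hash).
# In a real system these come from measured boot / IMA, anchored in a TPM.
KNOWN_GOOD = {
    "bootloader": hashlib.sha256(b"bootloader-v1").hexdigest(),
    "kernel": hashlib.sha256(b"kernel-5.x").hexdigest(),
}

def verify(reported: dict) -> bool:
    """Device remains trusted only if every measurement matches the allowlist."""
    return all(reported.get(k) == v for k, v in KNOWN_GOOD.items())

# A device reporting expected measurements stays trusted...
trusted = verify({"bootloader": KNOWN_GOOD["bootloader"],
                  "kernel": KNOWN_GOOD["kernel"]})

# ...while a tampered component (e.g., a modified kernel) fails verification,
# which would trigger taking the device offline for remediation.
tampered = verify({"bootloader": KNOWN_GOOD["bootloader"],
                   "kernel": hashlib.sha256(b"rogue").hexdigest()})
```

The hard parts that Keylime and the TPM actually solve, protecting the measurements from the device being measured and doing this at scale, are exactly what the sketch omits.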
6. Confidential Computing becomes more important at the edge
Security at the edge requires broad preparation. Resources such as network connectivity, electricity, staff, and equipment vary widely from site to site but are far scarcer than what would be available in a datacenter, and those limits constrain what can be done to ensure availability and security. Besides encrypting local storage and connections to more centralized systems, confidential computing offers the ability to encrypt data while it is in use by the edge computing device.
This protects both the data being processed and the software processing the data from being captured or manipulated. Fischer argues that “confidential computing on edge computing devices will become a foundational security technology for computing at the edge, due to the limited edge resources.”
According to the Confidential Computing Consortium's (CCC) report by the Everest Group, Confidential Computing – The Next Frontier in Data Security, "Confidential computing in a distributed edge network can also help realize new efficiencies without affecting data or IP privacy by building a secure foundation to scale analytics at the edge without compromising data security." Additionally, confidential computing "ensures only authorized commands and code are executed by edge and IoT devices. Use of confidential computing at the IoT and edge devices and back end helps control critical infrastructure by preventing tampering with code or data being communicated across interfaces."
Confidential computing applications at the edge range from autonomous vehicles to collecting sensitive information.
Diverse applications across industries
The diversity of these edge computing trends reflects both the diversity and the scale of edge workloads. There are some common threads: multiple physical footprints, the use of cloud-native and container technologies, and an increasing use of machine learning. However, telco applications often have little in common with industrial IoT use cases, which in turn differ from those in the automotive industry. But whatever industry you look at, you'll find interesting things happening at the edge in 2022.
[ Want to learn more about edge and data-intensive applications? Get the details on how to build and manage data-intensive intelligent applications in a hybrid cloud blueprint. ]