This article is part of the Technology Insight series, made possible with funding from Intel.
With 2020 predictions looming, there’s sure to be a fresh wave of hype around the edge and 5G. Now’s an ideal time to solidify and update your understanding, and to explore how the two will complement each other. If you’re processing payments, taking online orders, detecting fraud in the financial services industry, or exploring machine learning, these two technologies could help keep you competitive in the coming months.
What and why
Edge computing is all about processing information from devices closer to where it’s being created, rather than shuttling it back and forth from the cloud. Together with 5G, computing at the edge paves the way for applications that wouldn’t have been possible before. Think augmented and virtual reality, where ultra-low latency keeps what you see in sync with what you do, or autonomous vehicles that need to make split-second decisions based on huge volumes of data.
Globally, IDC forecasts 150 billion connected devices (including RFID) by 2025, many of which will pump out data in real time. In 2017, real-time data was a mere 15% of all information created, captured, or replicated. In 2025, it’s expected to reach 30%. As a percentage, that might not sound transformational. But it’s an order of magnitude higher in raw capacity (from ~5 to ~50 zettabytes). Edge computing gives you the power to perform intelligent analysis based on that deluge of real-time data almost instantaneously, all while minimizing your bandwidth expenses.
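The arithmetic behind that order-of-magnitude claim can be checked in a few lines. The total-datasphere figures used here (roughly 33 ZB created in 2017, roughly 175 ZB forecast for 2025) are IDC estimates assumed for illustration, not numbers stated above:

```python
# Rough arithmetic behind the real-time data growth claim.
# Total-datasphere sizes (~33 ZB in 2017, ~175 ZB in 2025) are
# assumed IDC estimates, used only to illustrate the percentages.
total_2017_zb = 33
total_2025_zb = 175

realtime_2017_zb = total_2017_zb * 0.15  # 15% of 2017 data was real-time
realtime_2025_zb = total_2025_zb * 0.30  # 30% expected by 2025

growth = realtime_2025_zb / realtime_2017_zb
print(f"2017 real-time data: ~{realtime_2017_zb:.0f} ZB")
print(f"2025 real-time data: ~{realtime_2025_zb:.0f} ZB")
print(f"Growth factor: ~{growth:.0f}x")  # roughly an order of magnitude
```

Doubling the share of a datasphere that itself grows five-fold is what turns a modest-sounding percentage change into a tenfold jump in raw capacity.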
Yes, it’s nearly 2020. But the definition of “edge” still often varies depending on who you ask. From NIST to IEEE, models are still evolving. It could be the Raspberry Pi in a refrigerated big rig selectively sending sensor information to the cloud, or the node processing game data for Google’s streaming Stadia platform. Although those two interpretations sit miles apart, both put compute resources closer to the user.
Thanks to the collaborative, vendor-neutral 2018 State of the Edge report, there now exists a more explicit definition with some industry consensus behind it:
- The edge is a location, not a thing.
- There are lots of edges, but the edge we care about is the edge of the last-mile network.
- This edge has two sides: an infrastructure edge and a device edge.
- Compute will exist on both sides, working in coordination with the centralized cloud.
For clarity, the device edge includes endpoints like phones, drones, AR headsets, IoT sensors, and connected cars; gateway devices like switches and routers; and on-premises servers. They’re all on the downstream side of your last-mile cellular or cable network. The infrastructure edge exists on the upstream side. That’s where you find compute resources colocated with network access equipment and regional datacenters.
In our big rig example, the Raspberry Pi is on the device edge. Rather than chewing up bandwidth by continually transmitting environmental data, it processes locally and only phones home in the event of an emergency. Conversely, hyper-local datacenters streaming a 4K gaming experience at 60 frames per second live on the infrastructure edge. Although the device edge offers a narrow latency advantage, you’re obviously going to find much more powerful hardware further upstream.
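The big-rig pattern above — read sensors locally, transmit only what matters — can be sketched as a simple filter loop. This is a minimal illustration, not a production design; the temperature threshold and the `send_alert` transport are hypothetical stand-ins (a real deployment would publish over MQTT or similar):

```python
# Minimal sketch of device-edge filtering: process readings on-device
# and phone home only when one crosses an alarm threshold.
# The threshold value and alert transport are illustrative assumptions.
import random
import time
from typing import Callable

TEMP_ALARM_C = -15.0  # refrigerated cargo must stay colder than this


def read_temperature_c() -> float:
    """Stand-in for a real sensor read (e.g., over I2C)."""
    return random.gauss(-18.0, 2.0)


def send_alert(reading: float) -> None:
    """Stand-in for an uplink call (MQTT, HTTPS, etc.)."""
    print(f"ALERT: cargo temperature {reading:.1f} C exceeds {TEMP_ALARM_C} C")


def monitor(samples: int,
            sensor: Callable[[], float] = read_temperature_c,
            interval_s: float = 0.0) -> int:
    """Evaluate readings locally; return how many alerts used the uplink."""
    alerts = 0
    for _ in range(samples):
        reading = sensor()
        if reading > TEMP_ALARM_C:  # only anomalies consume bandwidth
            send_alert(reading)
            alerts += 1
        time.sleep(interval_s)
    return alerts


if __name__ == "__main__":
    monitor(samples=100)
```

The point of the sketch is the asymmetry: every reading is processed on the device, but only the rare out-of-range reading ever touches the network.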
Beyond the distributed infrastructure edge lies the core network and, ultimately, the cloud, which is more centralized and scalable. But by the time you get to the cloud, latency is much higher (and far less consistent).
Benefits of low-latency edge computing
It’s easy to point at low latency as the killer application of edge computing, particularly as cloud-based software strains under the limitations of physics. Data cannot move any faster than the speed of light, so requests to servers hundreds or thousands of miles away inevitably take tens or hundreds of milliseconds to fulfill. The difference isn’t perceivable as you scroll through your Twitter feed. But those numbers wouldn’t be acceptable to a surgeon operating remotely or a gamer in virtual reality. Above all else, processing at the edge shaves away latency to keep data relevant.
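To put numbers on the physics argument, here is a back-of-the-envelope round-trip estimate. It assumes signals propagate through fiber at roughly two-thirds the speed of light and ignores routing, queuing, and serialization delays, so these are best-case floors, not realistic RTTs:

```python
# Back-of-the-envelope minimum round-trip time over fiber.
# Assumes ~2/3 c propagation in glass; ignores routing and queuing,
# so real-world latencies will be meaningfully higher.
SPEED_OF_LIGHT_KM_S = 299_792
FIBER_FRACTION = 2 / 3  # refractive index of glass slows light


def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time to a server distance_km away."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FRACTION)
    return 2 * one_way_s * 1000


for km in (50, 500, 2000):
    print(f"{km:>5} km: {round_trip_ms(km):5.1f} ms minimum RTT")
```

Even before any server processing, a request to a datacenter 2,000 km away costs about 20 ms in propagation alone, while an edge node 50 km away stays under a millisecond. That gap is why proximity, not just server speed, bounds latency.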
Edge computing also saves you from shuttling every bit of data back and forth between connected devices and the cloud. If you can determine the value of information close to where it’s created, you can optimize the way it flows. Limiting traffic to just the data that belongs on the cloud cuts down on bandwidth and storage costs, even for applications that aren’t sensitive to latency.
Reliability stands to benefit from edge computing, too. A lot can go wrong between the device edge and centralized cloud. But in rugged environments like offshore platforms, refineries, or solar farms, the device and infrastructure edges can operate semi-autonomously when a connection to the cloud isn’t available.
Distributed architectures can even be a boon to security. Moving less information to the cloud means there’s less information to intercept. And analyzing data at the edge distributes risk geographically. The endpoints themselves aren’t always easy to protect, so firewalling them at the edge helps limit the scope of an attack. Further, keeping data local may be useful for compliance reasons. An edge infrastructure gives you the flexibility to limit access based on geography or copyright limitations.
The edge is pervasive; 5G makes it better
Edge computing is not new. As far back as 2000, content delivery networks were being referred to as edge networks. But it’s widely accepted that as 5G coverage grows, edge computing is going to help address the high-bandwidth, low-latency requirements of modern applications with local, rather than regional, compute resources.
The technology underlying 5G will add speed, reliability, and flexibility to enterprise applications by getting compute resources closer to where data is being created. Information will move efficiently across 5G networks, rather than making a round trip to the centralized cloud and back. As a result, we’re going to be looking at use cases that previously weren’t possible.
According to the 2020 State of the Edge report, the largest demand for edge computing comes from communication network operators virtualizing their infrastructure and upgrading their networks for 5G. Mobile consumer services running on those networks are going to rely on edge computing to enable streaming game platforms, augmented/virtual reality, and AI.
Smart homes, smart grids, and smart cities all share a proclivity for device edge platforms. As those use cases evolve and become more sophisticated, though, there will be demand for infrastructure edge capabilities, too. 5G’s provisions for ultra-reliable low-latency communication (URLLC) and massive machine-type communication (mMTC) will let huge numbers of devices connect to nearby edge infrastructure over a short, fast, and dependable last hop.
And who could forget about the autonomous automobile, the poster child for edge computing enhanced by 5G? Modern automobiles already utilize compute resources on the device edge for collision avoidance, lane-keeping, and adaptive cruise control. But as assisted and autonomous driving features become more sophisticated, infrastructure edge resources will be required to add intelligence that could only come from the surrounding environment. Good examples: rerouting a trip based on traffic miles ahead, communicating with other autonomous vehicles to accelerate from a stoplight in unison, or making split-second decisions to avoid unsafe situations.
The edge is still young
The empowered edge is one of Gartner’s Top 10 Strategic Technology Trends for 2020. However, several other concepts on its watchlist have roots in edge computing as well. Hyperautomation, which deals with the application of artificial intelligence and machine learning to augment human input, is going to rely on a foundation of low latency and unflagging reliability. The multiexperience is another example, incorporating multisensory and multitouchpoint interfaces dependent on high bandwidth and real-time processing. Of course, autonomous things are all about AI, 5G, and edge computing.
Enabling those novel use cases will require substantial investment. A forecast model by Tolaga Research predicts a cumulative CapEx spend of $700 billion between now and 2028 on edge IT and datacenter infrastructure.
As the computing pendulum swings from the centralized cloud to a distributed edge, opportunities abound, particularly in the maturing infrastructure edge. Understanding the impact of edge computing and 5G will allow you to provide seamless customer experiences, test new markets, and act on insights in real time.