Presented by Gcore

Many factors go into improving the performance and experience of websites and online services, but the way applications and services are architected and deployed can make a big difference. First-generation CDNs (content delivery networks) began by staging content closer to users to reduce latency and improve the experience. Now companies like Gcore are exploring approaches that build on edge compute and edge optimization techniques used across the industry to further increase website performance.

An edge compute approach moves both content and applications closer to users. It takes advantage of newer decentralized infrastructure to break applications into smaller chunks and run them at the edge of the network.

An edge optimization approach improves application performance while keeping the same application development and deployment architectures. It focuses on improving the underlying distribution architecture rather than changing the apps themselves.

Edge compute popularity

Interest in edge computing has been growing rapidly in response to new tools for microservices, containerization and decentralized technology. Edge computing is also becoming essential in many IoT use cases in factories, industrial automation and self-driving cars. In addition, many Web3 advocates are exploring how decentralized apps could use blockchains to improve supply chain transparency and enable more efficient transactions.

In response, vendors have started exploring how these edge computing applications could be combined with CDN infrastructure to improve the performance of websites. For example, Cloudflare Workers allows developers to create lightweight functions that run at the edge of the network rather than on centralized servers.
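To illustrate the idea, the sketch below shows a fetch handler in the style of Cloudflare Workers. The route and response body are invented for this example, and a real Worker would be deployed through Cloudflare's platform rather than run standalone; the point is simply that the handler is a small function that answers requests at the edge instead of a centralized server.

```javascript
// Hypothetical sketch of a Cloudflare Workers-style edge function: a small
// fetch handler that answers requests at the CDN edge rather than at a
// centralized origin server. Uses the standard web Request/Response API
// (available as globals in Node 18+ and in the Workers runtime).
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === "/hello") {
      // Respond directly from the edge, no round-trip to an origin server.
      return new Response("Hello from the edge!", {
        headers: { "content-type": "text/plain" },
      });
    }
    return new Response("Not found", { status: 404 });
  },
};
```

Because the handler is just a function of a `Request`, it can run in whichever of the network's points of presence is nearest to the user.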

Similarly, others are exploring how new apps built on WebAssembly (Wasm) could provide a portable format for writing executable programs on the edge closer to users. Wasm was initially conceived to run natively on the browser as a faster alternative to JavaScript, but efforts like WasmEdge are exploring how it might bring cloud-native and serverless apps to the edge as well.

Challenges of edge compute

Edge computing may make a lot of sense in certain use cases. However, it requires a new workflow for web applications. Companies cannot just migrate their existing apps to the edge. Dmitriy Akulov, director of Edge Network stream at Gcore, explained, “It requires you to rewrite your whole app for the framework, which is a huge problem for most companies.” 

For example, developers have to split existing legacy apps and convert them into workers that are limited in terms of memory and performance. In addition, the new apps tend to be specific to the platform, which makes migrating back out a challenge.

These apps also need to be written with an awareness of the limitations of the edge. For example, when they store or read files, they need to be aware that the file system is distributed, which has a much higher latency than a local file system. There needs to be a mechanism to ensure that applications running in two different regions, such as Tokyo and New York, do not make conflicting changes to a file.
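One common mechanism for preventing such conflicts is optimistic concurrency: every write must present the version of the file it last read, and a write based on a stale version is rejected so the caller can re-read and retry. The sketch below is a minimal in-memory illustration of that idea, not Gcore's or any specific platform's actual mechanism.

```javascript
// Minimal sketch of optimistic concurrency (compare-and-set) for a
// distributed file store. Each write carries the version the writer last
// read; if another region has updated the file in the meantime, the
// version no longer matches and the write is rejected.
class VersionedStore {
  constructor() {
    this.files = new Map(); // name -> { data, version }
  }
  read(name) {
    return this.files.get(name) ?? { data: null, version: 0 };
  }
  write(name, data, expectedVersion) {
    const current = this.read(name);
    if (current.version !== expectedVersion) {
      // Conflict: someone else wrote first. Caller must re-read and retry.
      return { ok: false, version: current.version };
    }
    this.files.set(name, { data, version: expectedVersion + 1 });
    return { ok: true, version: expectedVersion + 1 };
  }
}
```

In the Tokyo/New York scenario, whichever region writes first succeeds, and the other region's stale write is rejected instead of silently overwriting it.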

Optimize infrastructure to simplify the apps

Vendors like Gcore are taking a different approach: optimizing the way experiences are delivered via the edge without requiring teams to refactor existing apps. “In most cases, people don’t need edge compute for running the whole service, or even parts of the service,” Akulov said.

Gcore has implemented several features that emulate the low latency experience of edge computing for existing apps. For example, Gcore connects its CDN to a cloud storage service to enable permanent caching. After the first request, the content is automatically staged near the user to reduce latency.
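The pattern described here is essentially a pull-through cache: the first request for an object fetches it from origin storage, and every later request is served from the edge copy. The sketch below illustrates the idea with a hypothetical `fetchFromOrigin` function standing in for the cloud storage backend; it is not Gcore's implementation.

```javascript
// Hypothetical sketch of permanent (pull-through) caching at the edge.
// The first request for a key goes to origin storage; the result is then
// kept at the edge so later requests never touch the origin again.
function pullThroughCache(fetchFromOrigin) {
  const cache = new Map(); // object key -> cached content
  return async function get(key) {
    if (cache.has(key)) {
      return cache.get(key); // served from the edge, near the user
    }
    const content = await fetchFromOrigin(key); // first request only
    cache.set(key, content);
    return content;
  };
}
```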

Features like microcaching can cache the results of database queries to reduce the number of calls to the backend. This is important for many web applications, such as blogs, that may pull in fifty different assets from a centralized database, each requiring a separate request and response. With microcaching, these requests are handled closer to the user, without as many round-trips to the backend database.
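A microcache can be as simple as a short-lived lookup table placed in front of the query function, so that repeated page loads within the TTL window are answered at the edge. The sketch below is a hypothetical illustration of the pattern, not Gcore's implementation.

```javascript
// Hypothetical sketch of microcaching: wrap a database query function so
// that results are cached at the edge for a short TTL, and repeated
// requests within that window skip the round-trip to the backend.
function microcache(queryFn, ttlMs = 1000) {
  const cache = new Map(); // key -> { value, expires }
  return async function cachedQuery(key) {
    const now = Date.now();
    const hit = cache.get(key);
    if (hit && hit.expires > now) {
      return hit.value; // served from the edge cache
    }
    const value = await queryFn(key); // round-trip to the origin database
    cache.set(key, { value, expires: now + ttlMs });
    return value;
  };
}
```

Even a TTL of a second or two collapses a burst of identical queries (a blog page pulling the same assets for many visitors) into a single backend call.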

The edge infrastructure can also help to optimize the content on the fly. This allows users to focus on creating one set of high-quality content, and then the edge infrastructure can optimize it for different devices, browsers, and screen sizes.

While most content is distributed as JPG or PNG files, these tend to be much larger than newer formats like WebP and AVIF. Converting a larger file into one of these newer formats can take a lot of computing horsepower. So Gcore built its CDN on powerful 3rd Generation Intel® Xeon® Scalable processors to dynamically optimize images and then cache them near the edge for other users when required.
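Part of such on-the-fly optimization is deciding which format a given browser can accept, which the edge can read from the request's `Accept` header before serving a cached variant. The sketch below shows only that decision step; the actual transcoding, which requires an image library and the compute described above, is omitted, and the function name is invented for the example.

```javascript
// Hypothetical sketch of edge-side image format negotiation: inspect the
// browser's Accept header and pick the most compact format it supports,
// falling back to the original JPEG/PNG for older clients.
function pickImageFormat(acceptHeader) {
  const accept = (acceptHeader || "").toLowerCase();
  if (accept.includes("image/avif")) return "avif"; // smallest, newest
  if (accept.includes("image/webp")) return "webp"; // widely supported
  return "original"; // serve the JPEG/PNG as uploaded
}
```

The edge can then cache one converted copy per format, so the expensive conversion happens once and later users nearby are served the already-optimized file.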

Akulov acknowledged that edge computing makes more sense than edge optimization in some limited use cases. Indeed, Gcore is working on new edge computing products that do not require teams to refactor existing apps as much as traditional approaches do. But most enterprises can see significant improvements with edge optimization today without substantial changes to their apps.

“If you are just a small business, even a small coffee shop, you can still get full benefit without having to know what edge compute is,” he said.

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact