Microsoft gave its cloud customers a free speed boost today with the launch of an Accelerated Networking feature that’s been in beta for almost a year and a half. When customers turn it on, their Azure compute instances will get access to up to 30 Gbps of network throughput.

Accelerated Networking taps into specialized chips that Microsoft has installed in its datacenters to offload the work of software-defined networking. That both frees up compute resources on the tech titan's cloud servers and provides customers with lower latency and less jitter in their network performance.

It’s part of Microsoft’s ongoing work to make Azure more friendly for developers of high-performance applications, and it also shows the value of the company’s ongoing deployment of field-programmable gate arrays (FPGAs) inside its datacenters. Those chips, which can be programmed to perform specific tasks faster than general-purpose processors, help drive the performance gains that Microsoft is touting.

The Accelerated Networking feature is available for most general-purpose and compute-optimized virtual machine instances with four or more vCPUs. (Instances that support hyperthreading require eight or more vCPUs.) It’s also limited by operating system compatibility — right now, customers can only enable it on instances running compatible versions of Windows Server, Ubuntu, SUSE Linux Enterprise Server, Red Hat Enterprise Linux, and CentOS.

Enabling Accelerated Networking doesn’t cost users anything extra, though it does take some work to get all of the SDN constructs set up properly.
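For those curious what that setup looks like in practice, here's a rough sketch using the Azure CLI. This is a hedged illustration, not official guidance: the resource group, network, and VM names are placeholders, and the `Standard_D8s_v3` size is just one example of a hyperthreaded instance meeting the eight-vCPU bar described above — check Azure's documentation for the sizes and regions actually supported.

```shell
# Assumption: the resource group, virtual network, and subnet below
# already exist; all names here are hypothetical placeholders.

# Create a network interface with Accelerated Networking enabled.
az network nic create \
  --resource-group myResourceGroup \
  --name myAccelNic \
  --vnet-name myVnet \
  --subnet mySubnet \
  --accelerated-networking true

# Attach the NIC to a VM of a supported size. Standard_D8s_v3 is a
# hyperthreaded size with 8 vCPUs, consistent with the requirement above.
az vm create \
  --resource-group myResourceGroup \
  --name myVm \
  --size Standard_D8s_v3 \
  --image UbuntuLTS \
  --nics myAccelNic
```

The key detail is that the feature is toggled on the network interface rather than the VM itself, which is part of why the SDN plumbing has to line up: the NIC, the VM size, and the guest OS all have to support it at once.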

The same hardware that’s powering Accelerated Networking also provides the foundation for Brainwave, a system that the company has developed for quickly running machine learning computations on top of a fleet of FPGAs. That means Microsoft can use some of its FPGA fleet for Accelerated Networking tasks, and then use the rest of it for other projects.