Market-leading public cloud Amazon Web Services today announced an improvement to its Auto Scaling feature that could prove useful to companies that depend on Amazon to keep their applications running, no matter what happens. It's now possible to set fine-grained policies governing what happens to underlying cloud infrastructure under certain conditions, such as when CPU utilization crosses a given threshold.

“Our goal is to allow you to create systems that can do an even better job of responding to rapid and dramatic changes in load,” Amazon Web Services chief evangelist Jeff Barr explained in a blog post on the news. “You can now define a scaling policy that will respond to the magnitude of the alarm breach in a proportionate and appropriate way. For example, if you try to keep your average CPU utilization below 50 percent, you can have a standard response for a modest breach (50 percent to 60 percent), two more for somewhat bigger breaches (60 percent to 70 percent and 70 percent to 80 percent), and a super-aggressive one for utilization that exceeds 80 percent.”
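The tiered behavior Barr describes can be sketched in a few lines of Python. The thresholds match his example, but the adjustment sizes below are illustrative assumptions, not AWS defaults; in practice, AWS customers express these bands as step adjustments attached to a scaling policy.

```python
# Sketch of step scaling: each band above the 50% CPU target maps to a
# progressively larger capacity adjustment. Bands follow Barr's example;
# the instance counts are hypothetical.
STEPS = [
    (50.0, 60.0, 1),   # modest breach: add 1 instance
    (60.0, 70.0, 2),   # somewhat bigger breach: add 2
    (70.0, 80.0, 4),   # bigger still: add 4
    (80.0, None, 8),   # super-aggressive response above 80%
]

def scaling_adjustment(cpu_utilization: float) -> int:
    """Return how many instances to add for a given average CPU utilization."""
    for lower, upper, adjustment in STEPS:
        if cpu_utilization >= lower and (upper is None or cpu_utilization < upper):
            return adjustment
    return 0  # below the 50% target: no scale-out needed
```

A reading of 55 percent would add one instance, while a reading of 90 percent would trigger the aggressive response and add eight.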

The Auto Scaling update won’t multiply Amazon Web Services’ annual revenue, now greater than $5 billion, tenfold overnight. But it should give AWS Auto Scaling customers more confidence that they can dial in exactly the performance and cost they want.

Automatic provisioning of cloud infrastructure is not in itself a distinguishing feature of Amazon; it has been available since 2009, and other major public clouds, such as Microsoft Azure and Google Compute Engine, offer it as well. But this enhancement gives Amazon customers a new creature comfort, the kind of thing that might make companies think twice before moving to a different cloud.