At its re:Invent user conference in Las Vegas today, public cloud infrastructure provider Amazon Web Services (AWS) announced the launch of AWS Batch, a service for automating the deployment of batch processing jobs.

For the past decade or so, developers have relied on the open-source Hadoop big data framework for batch processing, and AWS and other public clouds have rolled out managed versions of Hadoop along with additional services for batch and streaming workloads. Now AWS is trying to more directly meet the needs of developers who want to process lots of data automatically, even if it doesn't happen instantly. AWS is not the first cloud to do so: its biggest competitor, Microsoft Azure, introduced the Azure Batch processing service two years ago.

AWS Batch is designed to work with containers rather than the more traditional virtual machines (VMs). Customers provide the exact container images they need to run on top of AWS' EC2 computing infrastructure. (Shell scripts and Linux executables are also supported, and it will be possible to run Lambda functions in the future.)
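To make the container workflow concrete, here is a minimal sketch of what a job definition might look like, mirroring the request shape of the AWS Batch RegisterJobDefinition API (the same structure boto3's `register_job_definition` accepts). The job name, image URI, command, and resource figures are illustrative assumptions, not values from the announcement.

```python
# Sketch of an AWS Batch job definition request body. The name, image,
# command, and sizing below are hypothetical examples.
job_definition = {
    "jobDefinitionName": "nightly-etl",  # hypothetical job name
    "type": "container",                 # AWS Batch runs the job as a container
    "containerProperties": {
        # The exact container image the customer wants run on EC2:
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/etl:latest",
        "vcpus": 2,        # vCPUs reserved for each job attempt
        "memory": 2048,    # memory in MiB
        "command": ["python", "run_etl.py"],  # entrypoint inside the image
    },
}
```

In practice this dictionary would be passed to `boto3.client("batch").register_job_definition(**job_definition)`, after which jobs referencing it can be submitted to a queue.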

On top of that, AWS Batch can take advantage of cheaper EC2 instances, specifically those available on the spot market. Customers can also specify the types of instances they'd like, as well as minimum and maximum compute resources.
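The spot-market and min/max settings above live in what AWS Batch calls a compute environment. Below is a hedged sketch mirroring the request shape of the CreateComputeEnvironment API; the environment name, bid percentage, vCPU limits, instance types, and the subnet/security-group/role placeholders are all illustrative assumptions.

```python
# Sketch of an AWS Batch managed compute environment request body.
# All names, ARNs, and limits here are hypothetical examples.
compute_environment = {
    "computeEnvironmentName": "spot-batch-env",  # hypothetical name
    "type": "MANAGED",  # let AWS Batch provision and scale the instances
    "computeResources": {
        "type": "SPOT",            # draw capacity from the EC2 spot market
        "bidPercentage": 40,       # bid up to 40% of the On-Demand price
        "minvCpus": 0,             # minimum compute kept running
        "maxvCpus": 64,            # ceiling on provisioned compute
        "desiredvCpus": 4,
        "instanceTypes": ["m4.large", "c4.xlarge"],  # preferred instance types
        # Placeholders for account-specific networking and IAM settings:
        "subnets": ["subnet-EXAMPLE"],
        "securityGroupIds": ["sg-EXAMPLE"],
        "instanceRole": "ecsInstanceRole",
        "spotIamFleetRole": "arn:aws:iam::123456789012:role/spot-fleet-EXAMPLE",
    },
    "serviceRole": "arn:aws:iam::123456789012:role/AWSBatchServiceRole-EXAMPLE",
}
```

A request like this would be sent via `boto3.client("batch").create_compute_environment(**compute_environment)`; switching `"type"` under `computeResources` to `"EC2"` would use On-Demand instances instead of spot capacity.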

“In the past, many AWS customers have built their own batch processing systems using EC2 instances, containers, notifications, CloudWatch monitoring, and so forth. This turned out to be a very common AWS use case and we decided to make it even easier to achieve,” AWS chief evangelist Jeff Barr wrote in a blog post.

The new AWS service itself doesn't cost anything; charges are incurred only for the underlying resources the service sets up for you. It's in preview today, but only through AWS' US East (Northern Virginia) region, Barr wrote.