LinkedIn today released an open source project called Dynamometer that lets businesses stress-test large-scale Hadoop big data systems without provisioning a massive amount of infrastructure.

The tool is designed to prevent a repeat of an issue the enterprise social network encountered in early 2015, when the company added 500 machines to its Hadoop Distributed File System (HDFS) cluster in an attempt to improve performance. Instead, the team ran into a bug that only showed up at large scale and that caused jobs targeting the cluster to time out.

Dynamometer, which is named after a tool used to test cars, simulates large-scale clusters while requiring only about 5 percent of the actual underlying infrastructure. That helps developers get around one of the key issues with testing software at scale: actually provisioning all of the machines can be costly, even in a public cloud environment.
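To make that 5 percent figure concrete, here's a minimal back-of-the-envelope sketch. The ratio comes from the article; the cluster sizes and the helper function are hypothetical illustrations, not part of Dynamometer's actual interface.

```python
import math

# Roughly 5% of the real hardware, per LinkedIn's stated figure.
SIMULATION_RATIO = 0.05

def machines_needed(cluster_size: int, ratio: float = SIMULATION_RATIO) -> int:
    """Estimate how many hosts a Dynamometer-style simulation of a
    cluster of the given size would need, rounding up to whole machines."""
    return math.ceil(cluster_size * ratio)

# Simulating a 1,000-node HDFS cluster would take on the order of 50 machines,
# versus 1,000 if you provisioned the full cluster outright.
print(machines_needed(1000))  # → 50
```

At that ratio, even a test modeled on LinkedIn's 500-machine expansion would need only around 25 hosts.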

Instead, users can run Dynamometer against the same sorts of workloads they see in production and ensure that the system will stand up to software changes. LinkedIn used the tool to analyze the migration of the company's HDFS clusters from Hadoop 2.3 to 2.6, a change that required tuning certain cluster parameters to avoid performance issues.

Erik Krogen, a lead engineer on Dynamometer, told VentureBeat in an email that the tool is meant both for companies working with Hadoop at large scale, like LinkedIn, and for smaller shops that are proposing changes to the HDFS open source project and want to make sure those changes won't degrade performance at scale.

In the long run, Krogen hopes that Dynamometer will become part of release testing for HDFS, as well as regular continuous integration of new code changes between releases. That's why LinkedIn released it to the public as an open source project. The company has already used Dynamometer to help with the release of Hadoop 2.7.4, verifying that the maintenance release didn't negatively impact performance.