Join top executives in San Francisco on July 11-12, to hear how leaders are integrating and optimizing AI investments for success. Learn More
Software engineering management platform Jellyfish has launched what it’s calling the industry’s “first comparative benchmarking tool,” one that enables engineering leaders to see how well they’re performing against other companies.
Jellyfish Benchmarks, as the product is called, is based on the company’s own internal data, which it gathers and collates when engineering teams opt in to share their anonymized data with the broader pool.
Founded in 2017, Jellyfish’s core mission is to align activities from engineering teams with companies’ business objectives. It does this by analyzing myriad engineering “signals,” gleaned from developer tools such as issue trackers and source code management platforms, as well as project management tools. It’s all about establishing what teams are working on, tracking the progress they’re making and how individual teams and workers are performing.
Aggregated, pan-industry engineering data brings more context to the mix, allowing companies to compare and contrast internal figures with those of their peers across sectors.
So, what kind of benchmarks does Jellyfish now serve up? Users have access to more than 50 metrics, including time invested in growth, issues resolved, deployment frequency, pull requests merged, coding days, incident rate and mean time to repair (MTTR), among many others.
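To make a couple of these metrics concrete, here is a minimal sketch of how MTTR and deployment frequency are commonly computed from raw event timestamps. This is purely illustrative; the incident data is hypothetical and this is not Jellyfish’s actual implementation.

```python
from datetime import datetime, timedelta

# Hypothetical incident records: (outage start, resolution time).
incidents = [
    (datetime(2023, 5, 1, 9, 0), datetime(2023, 5, 1, 11, 30)),
    (datetime(2023, 5, 8, 14, 0), datetime(2023, 5, 8, 15, 0)),
    (datetime(2023, 5, 20, 2, 0), datetime(2023, 5, 20, 6, 30)),
]

# Hypothetical deployment timestamps over the same period.
deployments = [
    datetime(2023, 5, 2), datetime(2023, 5, 9),
    datetime(2023, 5, 16), datetime(2023, 5, 23),
]

def mean_time_to_repair(incidents):
    """Average elapsed time between an incident starting and being resolved."""
    total = sum((end - start for start, end in incidents), timedelta())
    return total / len(incidents)

def deployments_per_week(deployments, period_days):
    """Average number of deployments per seven-day window."""
    return len(deployments) / (period_days / 7)

print(mean_time_to_repair(incidents))        # 2:40:00
print(deployments_per_week(deployments, 28)) # 1.0
```

A benchmarking tool would compute figures like these per team, then compare them against anonymized distributions from other organizations.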
“Importantly, Jellyfish includes benchmarking for how teams are allocating or investing their time and resources — this helps teams understand how they compare on their time investments into innovation, support work, or keeping the lights on, for example,” Jellyfish product head, Krishna Kannan, told VentureBeat.
At the time of writing, some 80% of Jellyfish customers opt in to sharing their anonymized data with the benchmarking datasets, and only those customers are able to benefit from the new product. To get a little, you have to give a little, is the general idea.
“When Jellyfish customers onboard, they are offered the opportunity to leverage industry benchmarks built upon anonymized datasets from other Jellyfish customers — customers who opt-in will have their data anonymized and added to the benchmarking Jellyfish customer pool,” Kannan said. “In the rare instances where customers opt out of this opportunity, their dataset will not be added, but neither will they be able to leverage benchmarking as a feature.”
While software development teams arguably have access to more engineering data than ever, it’s not always possible to know from this data how well teams are actually performing on an ongoing basis — maybe they are doing well compared to historical figures, but are still hugely underperforming compared to companies elsewhere. This is the ultimate problem that Jellyfish Benchmarks seeks to address.
It’s also worth noting that Jellyfish rival LinearB offers something similar in the form of Engineering Benchmarks, spanning nine metrics. However, Jellyfish says that it covers dozens of metrics, which could open the utility to a wider array of use cases. *
“The reality we’ve found is that different teams are looking to optimize different metrics depending on their product, stage, business goals and so on,” Kannan said. “That’s why we’ve included benchmarking for whichever metrics our customers care most about.”
*Updated to correct a previous statement that suggested LinearB’s benchmarking product wasn’t fully integrated into its platform.