
One of the challenges in creating mobile apps is testing how they perform under a variety of conditions to ensure that updates and new features actually work as intended. With that in mind, Rainforest QA launched a new service today that aims to use machine learning and crowdsourcing to help developers make better apps.

Rainforest QA has an army of 60,000 humans who help run through tasks developers want to test. That's not necessarily unique in the mobile app testing space, but what is noteworthy is that the company also uses machine learning to augment the testers' efforts.

It’s a move that illustrates one of the key tenets of our AI-laden present: Humans and machines are often best when augmenting one another’s capabilities. Some situations require a human hand, while machine learning can help provide insights to improve products in ways that would be difficult for humans to manage.

One Rainforest QA system is used to determine how well testers perform based on their speed, accuracy, and consistency. The company then uses that information to programmatically determine the difficulty of tasks testers should be assigned. Testers with the highest reputation can even be assigned tasks to help train the machine learning systems.
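The approach described above can be sketched in code. This is a minimal, hypothetical illustration of the idea: blend a tester's speed, accuracy, and consistency into a single reputation score, then map that score to a task-difficulty tier. The weights, thresholds, and names are all assumptions for illustration, not Rainforest QA's actual system.

```python
from dataclasses import dataclass


@dataclass
class TesterStats:
    """Normalized performance signals for one tester (all assumed 0.0-1.0)."""
    speed: float        # how quickly tasks are completed
    accuracy: float     # fraction of results judged correct
    consistency: float  # stability of performance across recent tasks


def reputation(stats: TesterStats) -> float:
    """Weighted blend of the three signals; the weights are illustrative."""
    return 0.2 * stats.speed + 0.5 * stats.accuracy + 0.3 * stats.consistency


def assign_tier(score: float) -> str:
    """Map a reputation score to the difficulty of tasks a tester can receive."""
    if score >= 0.9:
        return "elite"      # top testers might also review others' work
    if score >= 0.7:
        return "standard"
    return "simple"
```

For example, a fast, accurate, consistent tester (`TesterStats(0.8, 0.95, 0.9)`) would score about 0.9 under these assumed weights and land in the top tier.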

“For example, when test quality is harder to determine, elite testers sometimes manually review unclear test results or even the body of work of another tester,” Maciej Gryka, Rainforest QA’s head of data science, said in an email. “After helping classify this data with higher reliability, the input is used as training data to improve the machine learning model, which in turn improves the overall performance and accuracy of the platform.”

Another system works to identify anomalous reports from testers in order to minimize false positives. That matters because an inaccurate bug report could waste development time for Rainforest QA's customers. The system the company built dynamically generates accuracy scores for particular test results based on the underlying reputation of the testers providing them.

The accuracy score is then used to determine which reports to send on to Rainforest customers in order to suggest what problems they need to fix.
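One way to picture this filtering step is a reputation-weighted vote: each tester's report counts in proportion to their reputation, and only results with high weighted agreement are forwarded. This is a hypothetical sketch; the weighting scheme, function names, and threshold are assumptions, not the company's actual implementation.

```python
def result_accuracy(reports: list[tuple[bool, float]]) -> float:
    """Compute a confidence score for one test result.

    reports: (saw_failure, tester_reputation) pairs from testers who
    ran the same test. Returns the reputation-weighted fraction of
    testers who agreed the test failed.
    """
    total_weight = sum(rep for _, rep in reports)
    if total_weight == 0:
        return 0.0
    failure_weight = sum(rep for failed, rep in reports if failed)
    return failure_weight / total_weight


def should_forward(reports: list[tuple[bool, float]], threshold: float = 0.75) -> bool:
    """Only send a bug report to the customer when weighted agreement is high."""
    return result_accuracy(reports) >= threshold
```

Under this scheme, two high-reputation testers reporting a failure outweigh one low-reputation tester who saw none, so the report would be forwarded; a split among equally reputable testers would fall below the threshold and be held back.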

The service has been in closed beta since February. In the intervening months, more than 3,000 testers have worked through over 20,000 tests.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.