We have seen article after article about how tech companies struggle with diversity at all levels of hiring, leading to toxic cultures for minorities (e.g. pre-2017 Uber). Even the algorithms and AI that underpin these products can have racial or gender biases, with embarrassing outcomes.
One topic that has been absent from this conversation is diversity and representation in early product testing. This stage of product development is hugely influential to the direction of a product, as well as who it ultimately serves. For example, if the majority of people who use a brand-new product are high-income white men who work in tech (which early adopters tend to be), then most of the user feedback the product team receives will serve to tailor that product to their needs and may not be generalizable to the needs of a broader audience.
It is common wisdom that product-market fit is achieved by building for the small group of people who love your product. If product research and roadmaps are based on feedback from early adopters and those early adopters are not diverse, how can we build tech that serves a broader segment of society?
Diversifying the feedback loop
We are working through this issue at the startup I work for, Neeva. Since we are quite new, we have created a waitlist for folks who want to test our product for free before we launch publicly. The vast majority of people on our waitlist are men, and a significant number of them work in tech.
We set out to do some research on how to attract more diverse sets of people to test an early-stage product and found a profound lack of resources for early-stage startups looking to attract well-rounded audiences (and not pay a ton of money in the process, a common worry for pre-revenue companies). There seemed to be little attention paid to this topic, resulting in a lack of data on the demographics of early product adopters and testers. So we have had to forge our own way for the most part.
First, we checked for skewed demographics in our signup list by plotting the distribution by key demographic slices.
When we sliced our signup data by basic attributes, it was clear that certain demographics were over-represented. One contributing factor was that many of our users heard about us from tech publications and forums, which may not reflect the makeup of the overall US population. This has subsequently influenced how we try to attract new audiences post-launch.
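As a minimal sketch of that skew check, you can compare each group's share of signups against a reference population. The signup records, group labels, and reference shares below are all hypothetical placeholders, not Neeva's actual data:

```python
from collections import Counter

# Hypothetical signup records; in practice these come from your signup database.
signups = (["man"] * 80) + (["woman"] * 15) + (["nonbinary"] * 5)

def demographic_shares(records):
    """Return each group's share of the total signup list."""
    counts = Counter(records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Flag groups whose share deviates strongly from a reference population.
# Reference shares here are illustrative, not census figures.
reference = {"man": 0.49, "woman": 0.50, "nonbinary": 0.01}
shares = demographic_shares(signups)
skew = {g: round(shares.get(g, 0.0) - p, 2) for g, p in reference.items()}
print(skew)  # positive = over-represented, negative = under-represented
```

Plotting these deltas per slice gives you the at-a-glance view described above.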
We then had to determine how to avoid building only for testers who fit the “early adopter” profile. Once testers were on our platform, we performed “stratified sampling” based on demographics, which is just a fancy way of saying we sampled within each demographic category and then combined those sub-samples to create the overall sample. We used this methodology both when selecting users to poll for feedback and when selecting users to participate in research, ensuring that each demographic was appropriately represented and that the majority viewpoint did not get over-sampled.
We also built these demographic slices directly into our dashboards (e.g. usage by gender a, gender b, gender c, etc.). The key here is not to apply the slice as just a “filter,” since it is difficult to compare across filtered results in a systematic way, but to build it into the dashboard as a core view.
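The difference between a filter and a core view is that a core view computes the metric for every slice side by side in one pass. A minimal sketch, with hypothetical event rows and metric names:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical event log: one row per user with a usage metric.
events = [
    {"gender": "a", "queries_per_day": 12},
    {"gender": "a", "queries_per_day": 8},
    {"gender": "b", "queries_per_day": 15},
    {"gender": "c", "queries_per_day": 5},
]

def usage_by_slice(rows, slice_key, metric):
    """Aggregate a metric per demographic slice so slices sit side by side,
    instead of requiring one filtered dashboard view per group."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[slice_key]].append(row[metric])
    return {g: mean(vals) for g, vals in sorted(groups.items())}

view = usage_by_slice(events, "gender", "queries_per_day")
print(view)  # average queries per day, keyed by slice
```

In a BI tool this is the same idea as a group-by breakdown baked into the default chart, rather than a dropdown filter the viewer has to toggle.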
We also used tools like SurveyMonkey and UserTesting to find diverse sets of people and understand their needs when it came to our product. This feedback helped influence our roadmap and supplemented tester feedback. One thing to remember with self-reported data, diverse or otherwise, is that it is important to remove hurried or inconsistent responses. I’ve included a few examples below of questions you can use to weed out low-quality responses.
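Two common checks for low-quality survey responses are a minimum completion time (to catch speeders) and a repeated question asked in different wording (to catch inconsistent answers). The thresholds, field names, and responses below are hypothetical, sketched only to show the shape of the filter:

```python
# Hypothetical survey responses: completion time plus the same question asked
# twice in different wording ("q3" and "q3_check") as a consistency trap.
responses = [
    {"id": 1, "seconds": 240, "q3": "agree", "q3_check": "agree"},
    {"id": 2, "seconds": 35,  "q3": "agree", "q3_check": "agree"},     # too fast
    {"id": 3, "seconds": 300, "q3": "agree", "q3_check": "disagree"},  # inconsistent
]

MIN_SECONDS = 60  # assumed threshold; calibrate against median completion time

def is_high_quality(r):
    """Drop speeders and respondents who contradict themselves."""
    return r["seconds"] >= MIN_SECONDS and r["q3"] == r["q3_check"]

clean = [r for r in responses if is_high_quality(r)]
print([r["id"] for r in clean])
```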
Finally, it is important to make sure that each demographic slice is large enough to support statistically meaningful conclusions; otherwise, you have to treat its data as directional only.
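One rough way to operationalize that rule is a margin-of-error check on each slice. The 10-point cutoff below is an assumed threshold, not a universal standard; this sketch uses the standard 95% normal approximation for a proportion:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from n responses.

    p=0.5 is the worst case (widest interval); z=1.96 is the 95% z-score.
    """
    return z * math.sqrt(p * (1 - p) / n)

def is_directional_only(n, max_moe=0.10):
    """Treat a slice as directional if its margin of error exceeds 10 points
    (assumed cutoff -- pick one that matches your tolerance for noise)."""
    return margin_of_error(n) > max_moe

# A slice of 40 testers carries roughly a 15-point margin: directional only.
# A slice of 400 testers gets under 5 points: solid enough to act on.
print(round(margin_of_error(40), 2), round(margin_of_error(400), 2))
```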
More perspectives lead to better products
All of this work helped us understand that testers across the country, regardless of their profession, were quite knowledgeable about the applications of our product (ad-free search). They were also very aware of the influence of advertiser dollars on the products they use, which meant there were real problems we could solve for them.
Minority groups of testers, although small percentage-wise, have meaningfully influenced our product direction. (And “minority” here can refer to any minority demographic, whether it be race, profession, interest, etc.) An example: By speaking with parents across all genders (~30% of our testers), we learned that family plans, where we can create safer and more private experiences for children and teens, would be a key differentiator in their search experience. Based on minority group feedback, we are also considering letting people find small boutique retailers, or retailers that sell only sustainably sourced products, to avoid having results dominated by the obvious large retailers.
By taking the time to deeply analyze our data and balance our research, we have discovered audiences we didn’t consider part of our target market originally. We are building a product that is useful beyond the bubble of early adopters for all sorts of use cases.
Sandy Banerjee is Head of Marketing at Neeva.