In a paper published this week on the preprint server arXiv.org, researchers affiliated with Rutgers, the University of California, and the University of Washington propose an approach to mitigate what they characterize as an “unfairness problem” in product recommendation algorithms. They say their algorithm delivers high-quality explainable recommendations on real-world data sets while reducing recommendation unfairness in several key respects.

It’s an open secret that AI systems and the corpora on which they’re trained often reflect racial, gender, and other stereotypes and biases; indeed, Google recently introduced gender-specific translations in Google Translate chiefly to address gender bias. The researchers peg the blame on an imbalance in AI model training data. For instance, economically disadvantaged groups tend to make fewer purchases on ecommerce platforms like Amazon and eBay, which leaves them with sparse histories of user-item interactions. Because recommendation systems learn from these interactions, they absorb this imbalance as bias and end up treating certain users unfairly.

The researchers sought to address the problem specifically for knowledge graph-based systems, which preserve structured, relational knowledge and can therefore explain the reasons behind their recommendations. They studied Amazon data sets for four item categories — CDs and vinyl, clothing, cell phones, and beauty — and analyzed the differences between inactive users (i.e., those with relatively few purchases) and active users (frequent purchasers). The results appear to show that although the inactive group constitutes the majority of users (95%) in the data set, their user-item interaction patterns lack diversity. Moreover, the inactive group obtains far lower scores on Simpson’s Index of Diversity — a metric originally developed to assess biodiversity, used here to quantify how varied a user’s interactions are — compared with the active group.
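
To make the metric concrete, here is a minimal sketch of how Simpson’s Index of Diversity can be computed over a user’s interactions; the grouping by brand and the exact variant of the formula are illustrative assumptions, not necessarily the paper’s setup.

```python
from collections import Counter

def simpson_diversity(interactions):
    """Simpson's Index of Diversity, 1 - sum(n_i * (n_i - 1)) / (N * (N - 1)),
    computed over the categories a user has interacted with.
    Higher values mean more varied interaction patterns."""
    counts = Counter(interactions)
    total = sum(counts.values())
    if total < 2:
        return 0.0
    dominance = sum(n * (n - 1) for n in counts.values()) / (total * (total - 1))
    return 1.0 - dominance

# A user whose purchases all come from one brand scores 0;
# purchases spread across several brands score much higher.
print(simpson_diversity(["brand_a"] * 5))                                           # 0.0
print(simpson_diversity(["brand_a", "brand_b", "brand_c", "brand_a", "brand_d"]))   # 0.9
```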

The team proposes a “fairness-aware” path reranking algorithm for explainable recommendations to remedy this, following from the observation that knowledge graph-based recommenders treat user-item paths as pertinent recommendation signals. For instance, a person might wish to purchase the same bracelet as another person because both purchased the same key chain, and they might also consider a sweater because it matches the brand of a previous purchase. Given a set of users from different groups, the goal of the team’s algorithm is to maximize recommendation quality by ranking the paths for each user under the constraints of group and individual fairness. (In this context, group fairness means that users from the two groups have the same probability of receiving favorable recommendations, while individual fairness means that similar individuals are treated similarly.)
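
Read as a constrained ranking problem, the objective can be sketched roughly as follows; the notation here is illustrative and not taken from the paper.

```latex
\max_{\pi} \sum_{u} Q_{\pi}(u)
\quad \text{s.t.} \quad
\Bigl| \tfrac{1}{|G_1|}\textstyle\sum_{u \in G_1} Q_{\pi}(u)
     - \tfrac{1}{|G_2|}\textstyle\sum_{v \in G_2} Q_{\pi}(v) \Bigr| \le \epsilon_{\mathrm{group}},
\qquad
\bigl| Q_{\pi}(u) - Q_{\pi}(v) \bigr| \le \epsilon_{\mathrm{ind}} \ \text{ for similar users } u, v
```

Here, \(\pi\) is a choice of path rankings, \(Q_{\pi}(u)\) is the recommendation quality user \(u\) receives under it, and \(G_1\) and \(G_2\) are the two user groups (e.g., inactive and active).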

The researchers’ algorithm takes into account four scores (a rough sketch of how they might be combined follows the list):

  1. A path score that considers a more varied set of paths beyond the kinds that dominate the historical user-item interaction data, specifically by incorporating a debiasing weighting.
  2. A diversity score that captures, for each user-item pair, how diverse the explainable paths connecting them are.
  3. A recommendation score that calculates the preference score between a user and an item.
  4. A ranking score that determines the order in which items are presented, balancing the recommendation score against the fairness score.
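
The sketch below illustrates one way such scores might be blended into a single ranking; the `Candidate` structure, the linear combination, and the weights are assumptions for illustration, whereas the paper’s actual algorithm enforces fairness constraints during reranking rather than relying on a fixed weighted sum.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    item: str
    rec_score: float        # preference score between the user and the item
    path_score: float       # debiased score over the explainable paths
    diversity_score: float  # diversity of the paths connecting user and item

def fairness_aware_rerank(candidates, w_rec=0.6, w_path=0.25, w_div=0.15):
    """Blend the scores into a single ranking score and sort candidates by it.
    The weights and the linear blend are illustrative assumptions."""
    def ranking_score(c):
        return w_rec * c.rec_score + w_path * c.path_score + w_div * c.diversity_score
    return sorted(candidates, key=ranking_score, reverse=True)

# Example with made-up scores: the sweater's richer, more diverse paths
# let it overtake the bracelet despite a lower raw preference score.
ranked = fairness_aware_rerank([
    Candidate("bracelet", rec_score=0.82, path_score=0.40, diversity_score=0.35),
    Candidate("sweater", rec_score=0.78, path_score=0.70, diversity_score=0.60),
])
print([c.item for c in ranked])  # ['sweater', 'bracelet']
```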

In experiments on the aforementioned Amazon data sets, the researchers say their algorithm improved both recommendation quality and fairness compared with a baseline recommendation system. On the clothing data set, it achieved 3.101% on normalized discounted cumulative gain (which evaluates ranking quality by considering the position of correctly recommended items) versus the vanilla system’s 2.856%, and it decreased group unfairness from 1.410% to 0.233% while finding more correct recommendations.
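
For reference, normalized discounted cumulative gain is a standard ranking metric; a minimal sketch with binary relevance (1 for a correctly recommended item, 0 otherwise) looks like this:

```python
import math

def ndcg_at_k(relevances, k):
    """Normalized discounted cumulative gain at rank k for binary relevance.
    Items near the top of the list contribute more because of the
    logarithmic position discount; the result is normalized by the ideal
    ordering so a perfect ranking scores 1.0."""
    def dcg(rels):
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# A correct item at rank 1 is worth more than the same item at rank 5.
print(ndcg_at_k([1, 0, 0, 0, 0], k=5))  # 1.0
print(ndcg_at_k([0, 0, 0, 0, 1], k=5))  # ~0.39
```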

The coauthors acknowledge that their algorithm sacrificed some performance for the most active users, but they consider this an acceptable tradeoff given that it boosted performance for the inactive users. They plan to release it as open source on GitHub in the near future.

“In this work, we study the prominent problem of fairness in the context of state-of-the-art explainable recommendation algorithms over knowledge graphs … It is fairly remarkable that after adopting the fairness-aware algorithm over two recent state-of-the-art baseline methods, we are able to retrieve substantially better recommendation results than the original methods,” the coauthors wrote. “We conjecture that our algorithm better harnesses the potential of current explainable recommendation methods by adapting the path distribution so that more diverse reasoning paths can be served to the users. Since the inactive users undergo a prejudiced treatment due to the homogeneous user interactions with lower user visibility, they tend to benefit more under our fairness-aware algorithm.”
