Pinterest today shared details about how it created PinSage, a graph convolutional network that can learn representations of nodes (in this case, Pins) in massive web-scale graphs.
Pinterest began using PinSage for ad recommendations in February, then deployed it more broadly for things like shopping recommendations in June, a company spokesperson told VentureBeat in an email.
With 3 billion nodes and 18 billion edges, the graph PinSage operates on is 10,000 times larger than typical applications of graph convolutional networks, Pinterest engineers believe.
In a manner similar to Google’s Word2Vec, which learns word embeddings from nearby words, PinSage learns recommendations by borrowing information from nearby Pins in the graph, drawing on both visual features and text descriptions of Pins.
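The core idea of a graph convolution can be sketched in a few lines: a Pin's new embedding is built by averaging its neighbors' features and combining that summary with the Pin's own features. The snippet below is a minimal toy illustration, not Pinterest's implementation; the graph, feature vectors, and weight matrices are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: each Pin has a feature vector standing in for
# its combined visual and text-description embedding.
features = {pin: rng.normal(size=4) for pin in ["a", "b", "c", "d"]}
neighbors = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b"]}

def convolve(pin, W_self, W_neigh):
    """One simplified graph-convolution step: average the neighbors'
    features, linearly transform both the neighbor summary and the
    Pin's own features, and pass the sum through a ReLU."""
    neigh_avg = np.mean([features[n] for n in neighbors[pin]], axis=0)
    combined = W_self @ features[pin] + W_neigh @ neigh_avg
    return np.maximum(combined, 0)  # ReLU nonlinearity

# Randomly initialized weights; in a real system these are learned.
W_self = rng.normal(size=(4, 4))
W_neigh = rng.normal(size=(4, 4))
embedding = convolve("a", W_self, W_neigh)
```

Stacking several such steps lets information flow in from Pins two or more hops away, which is what allows visually similar but semantically different Pins to be told apart by their graph context.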
The approach taken by PinSage has led to a 25 percent increase in impressions for Shop the Look, a feature that lets Pinterest users buy clothes seen in Pins.
A technique for improving how nearby Pins are sampled also delivered a 46 percent performance gain over traditional random sampling of graph neighborhoods.
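One way to improve on uniform random sampling, and the approach described in the PinSage paper, is to define a node's neighborhood by short random walks: the nodes those walks visit most often become the sampled neighbors, with visit counts serving as importance weights. The sketch below is a toy illustration with a made-up graph, not production code.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical toy graph of Pins and their connections.
graph = {
    "a": ["b", "c", "d"],
    "b": ["a", "c"],
    "c": ["a", "b", "e"],
    "d": ["a"],
    "e": ["c"],
}

def important_neighbors(start, num_walks=200, walk_len=3, top_k=2):
    """Run short random walks from `start` and keep the most-visited
    nodes; the visit counts double as importance weights when the
    neighbors' features are later pooled together."""
    visits = Counter()
    for _ in range(num_walks):
        node = start
        for _ in range(walk_len):
            node = random.choice(graph[node])
            if node != start:
                visits[node] += 1
    return visits.most_common(top_k)

top = important_neighbors("a")
```

Compared with picking direct neighbors uniformly at random, this biases the sample toward nodes that are strongly connected to the starting Pin through many short paths.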
In comparisons with methods previously used at Pinterest, PinSage provided better recommendations than the company's Pixie recommendation engine, as well as systems that recommend based on image descriptions or visual content alone.
“Our model relies on this graph information to provide the context and allows us to disambiguate Pins that are (visually) similar, but semantically different,” Pinterest Labs research scientist Ruining He said in a blog post. “To our knowledge, this is the largest application of deep graph embeddings to date and paves the way for a new generation of web-scale recommender systems based on graph convolutional architectures.”
PinSage comes from Pinterest Labs, a group formed in 2017 by Stanford computer science professor and Pinterest chief scientist Jure Leskovec.
PinSage performance results were published on arXiv in June and will be presented at the SIGKDD conference next week in London.