At this point, people know a lot about Snap Inc., the company behind Snapchat (the mobile app for sharing videos, messaging with friends, and perusing content from media outlets) and Spectacles (camera-equipped glasses for capturing video that can be shared in Snapchat). Now that it has filed its paperwork to go public, the company is less of a mystery than ever.
But there’s a part of Snap Inc. that’s still poorly understood: its research division, whose existence I first detailed in 2015.
Thankfully, Snap's IPO filing does disclose some details, including how much the company spends on research and a high-level overview of its research arm's purpose.
“Our research and development efforts focus on product development, advertising technology, and large-scale infrastructure,” the company said in the S-1 document submitted to the U.S. Securities and Exchange Commission.
The filing continues: "We work relentlessly to improve our product offerings by creating and improving products for our users, partners, and advertisers. We design products that create and enhance camera experiences, and new technologies are at the core of many of these opportunities.

"We constantly develop and expand our advertising products, delivery framework, and measurement capabilities. We believe relevant and engaging advertising tends to be more effective and is better for both our community and advertising partners. Our technology roadmap centers on efficient advertising delivery, sophisticated buying and campaign optimization, and measurement solutions for our advertisers."
With respect to infrastructure, the company pointed out that it currently depends on Google to operate its hosting infrastructure. Its researchers could well be exploring systems for the company to use after the end of Snap’s five-year deal with Google.
Facebook, Google, and Microsoft disclose lots of information about their research teams and priorities. Snap is downright secretive by comparison. Very little source code is shared publicly. Patents mostly (but not completely) cover features that are already live in the Snapchat app. Research scientists don’t often publish academic papers or participate in competitions. Given that, it does seem that the five-year-old company’s research activity is more oriented toward product development than basic research.
What is certain is that Snap has been staffing up the group. LinkedIn data provides a sense of where people are coming from: some from universities like Carnegie Mellon, some from corporate research organizations like Qualcomm Research, and a few straight out of Google. The same data suggests that more than 30 people populate Snap's research group and that nearly all of them are based in Los Angeles, where Snap is headquartered.
But retention could prove to be a problem over time. Jia Li, who was tapped to lead the organization in February 2015, decamped to Google in November 2016 to co-lead the search company's Cloud Machine Learning group. Li's successor has not been announced.
And the IPO filing discloses an unusual compensation-related matter in Snap's research group. "Research and development expenses in the third quarter of 2016 were higher than the previous quarters presented due to stock-based compensation expense from the modification of the terms of RSUs granted to one employee," the company said. The wording isn't perfectly clear, but it may be a way of communicating that an employee asked for more money after being offered a high salary at another company. That wouldn't be surprising, given the high salaries that some companies are offering top talent, particularly in the field of deep learning, a type of artificial intelligence (AI). Snap employs many researchers who focus on this area.
It's also not very clear what technologies Snap primarily uses for AI. But it does seem that at times the team members have been open to working with existing technologies, as opposed to always insisting on their own proprietary systems. In November, Li, Snap researchers Linjie Luo and Ning Zhang, and University of Toronto Ph.D. student Shenlong Wang published a paper detailing AutoScaler, a system that can help discover connections between multiple related images. Shortly after that, Wang, who was a research intern at Snap last summer, published the code for AutoScaler on Bitbucket.
In doing so, Wang showed that the code relies on TensorFlow, the deep learning framework that Google open-sourced in 2015.
Snap did not respond to a request for comment.