
Some third-party Facebook apps could be misusing user data for ransomware, spam, and targeted advertising, according to a study by researchers at the University of Iowa. Their work, which was accepted to the Privacy Enhancing Technologies Symposium (PETS), used a tool called CanaryTrap in conjunction with Facebook’s ad transparency tool to detect unrecognized uses of users’ personal data.

Facebook hosts countless third-party apps that have access to potentially billions of accounts containing information like email addresses, dates of birth, gender, and likes. Making matters worse, it’s difficult to detect data misuse by these apps because they store data on servers beyond the purview of Facebook itself.

To shed light on this, the study’s coauthors developed CanaryTrap, a tool that uses “honeytokens” — monitored email accounts — to detect unauthorized data use. CanaryTrap first shares a honeytoken with a third-party app, then watches for email from senders the app never disclosed. Because advertisers on Facebook can target ads to custom audiences built from email addresses, the coauthors also checked whether advertisers who uploaded the honeytoken addresses could be recognized as the apps in question. If they couldn’t, the researchers assumed the address (or addresses) had been misused.
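The core honeytoken check can be illustrated with a minimal sketch (the function and data below are hypothetical illustrations, not the researchers’ code): a fresh, monitored address is shared with exactly one app, and any mail from a sender the app never disclosed signals possible misuse.

```python
# Hedged sketch of the honeytoken idea behind CanaryTrap.
# All names, addresses, and the message format are illustrative assumptions.

def detect_misuse(honeytoken, received_messages, disclosed_senders):
    """Return senders that emailed the honeytoken address despite having
    no disclosed relationship with the app it was shared with."""
    unrecognized = []
    for msg in received_messages:
        if msg["to"] == honeytoken and msg["sender"] not in disclosed_senders:
            unrecognized.append(msg["sender"])
    return unrecognized

# Example: one expected sender, one stranger.
inbox = [
    {"to": "canary42@example.com", "sender": "mailer@someapp.example"},
    {"to": "canary42@example.com", "sender": "spam@unknown.example"},
]
print(detect_misuse("canary42@example.com", inbox,
                    disclosed_senders={"mailer@someapp.example"}))
# → ['spam@unknown.example']
```

In the actual study, a flagged sender was then cross-checked against the app’s disclosed partners before being counted as misuse.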

“Prior experience of our group in studying abuse of third-party apps on online social networks keeps us motivated to investigate potential risks posed by third-party apps to users,” coauthor Shehroze Farooqi told VentureBeat via email. “In the past few years, we came across several high-profile incidents of data misuse by third-party apps (such as the Cambridge Analytica scandal). Our review of prior literature showed that existing research on this topic lacks methods to systematically detect potential misuse of user data by third-party apps. This motivated us to develop a methodology that can detect misuse of user data shared with third-party apps.”



Because Facebook’s anti-abuse systems thwart bulk account registration and limit how frequently the email addresses associated with accounts can be rotated, scaling CanaryTrap required designing two frameworks: an array framework and a matrix framework. The array framework rotates email addresses while maintaining a one-to-one mapping between shared honeytokens and apps, while the matrix framework shares each honeytoken with multiple apps and still attributes data misuse to the responsible app.
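The matrix framework’s attribution trick can be sketched as follows (the function names and addresses are illustrative, not the paper’s code): each app is assigned a unique pair of honeytokens, one “row” address and one “column” address, so n² apps can be monitored with only 2n addresses. If both honeytokens in a pair later receive unrecognized email, the app at that grid position is implicated.

```python
# Hedged sketch of a matrix-style honeytoken assignment for attribution.
# All app names and email addresses here are illustrative assumptions.
from itertools import product

def assign_honeytokens(apps, row_tokens, col_tokens):
    """Map each app to a unique (row_token, col_token) honeytoken pair."""
    cells = product(row_tokens, col_tokens)
    return {app: pair for app, pair in zip(apps, cells)}

def attribute_misuse(assignment, flagged_tokens):
    """Return apps whose row AND column honeytokens both received
    unrecognized email, implicating them as the misuse source."""
    flagged = set(flagged_tokens)
    return [app for app, (r, c) in assignment.items()
            if r in flagged and c in flagged]

apps = ["app_a", "app_b", "app_c", "app_d"]
rows = ["row0@example.com", "row1@example.com"]
cols = ["col0@example.com", "col1@example.com"]

assignment = assign_honeytokens(apps, rows, cols)
# Suppose monitoring flags spam arriving at row0@ and col1@:
print(attribute_misuse(assignment, ["row0@example.com", "col1@example.com"]))
# → ['app_b']
```

A single misbehaving app lights up exactly one row/column pair; when several apps misuse data at once, attribution becomes ambiguous, which is why the one-to-one array framework is still needed as a complement.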

Over the course of more than a year, the coauthors applied CanaryTrap to 1,024 third-party Facebook apps. Since Facebook doesn’t provide an index of third-party apps, they drew on a database of 25,800 email address-requesting apps compiled by other researchers, from which they randomly selected 1,024.

The research team then set up an email server and used a list of popular names to generate email accounts following a fixed naming template. Next, they registered three Facebook accounts in total, configuring the privacy settings so that the accounts’ information, including email addresses, remained hidden from everyone except the installed apps.

Sixteen third-party apps shared addresses with unrecognized senders out of the 1,024, according to the coauthors. Of these, nine apps had a disclosed relationship with the senders, which were typically external services (e.g., user authentication services), partner or affiliate websites, or companies that acquired the Facebook app. The remaining seven had an unknown relationship, meaning the senders potentially had access to the user’s data through breaches or leakages on the app’s servers or through secret data-sharing deals.

Sixteen apps out of 1,024 might not sound like a lot. But extrapolating out to the tens of thousands of third-party apps available through Facebook, the implication is that there could be many thousands of apps misusing emails and other personal data.

The flagged apps include:

  • Safexbikes Motorcycle Superstore
  • WeWanted
  • Printi BR API
  • JustFashionNow
  • PopJulia
  • MyJapanBox
  • Nyx CA
  • Tom’s Hardware Guide-IT Pro (since deactivated)
  • Alex’s first app
  • Thailand Property Login
  • Hop-on, Hop-Off
  • Leiturinha
  • The Breast Expansion Story Club
  • Jacky’s Electronics
  • Login

The researchers report that three of the apps were responsible for 76 malicious emails, including ransomware scams and Viagra spam. Nine of the apps could be linked to 79 “unrelated” emails including promotional offers, links to product listings, and newsletters — a possible violation of Facebook’s Terms of Service, which requires that apps clearly notify users about data usage by other parties. And two of the apps — Safexbikes Motorcycle Superstore and Printi BR API — showed anecdotal evidence that their host sites were breached.

“To date, we have not received any disclosure from any of these apps’ host websites about a data breach,” the coauthors wrote, noting that six out of the 1,024 apps they analyzed lacked any kind of privacy policy.

After they deployed CanaryTrap, the researchers used Facebook’s ad transparency tool to identify 47 unique advertisers that had uploaded honeytoken email addresses for ad targeting. Nine of these advertisers were unrecognized, meaning none of the apps had disclosed a relationship with them.

In the interest of thoroughness, the researchers attempted to contact 100 of the app publishers that had sent emails. After emailing 87 successfully — 13 couldn’t be reached due to website and delivery errors — they received responses from 45 (52%) of the publishers. Only 29 of those acknowledged they had deleted data or canceled accounts. More concerning, 49 of the 87 apps continued to send at least one email after the coauthors submitted their data deletion requests.

“The process to request data deletion is hard to navigate for a lay user. Facebook currently does not play any active part in the data deletion process,” the coauthors wrote. “Facebook completely relies on third-party app developers to fulfill users’ data deletion requests … many apps use cookie-cutter policies that do not comply with Facebook’s Terms of Service. It is noteworthy that even when apps provide a compliant privacy policy, Facebook does not have a sound mechanism to check whether the apps are actually in compliance.”

In light of their findings, the researchers argue Facebook should mandate that developers implement a data deletion request callback in their apps, a user-friendly mechanism for requesting deletion that could also help the network audit compliance. “Third-party apps on online social networks with access to users’ personal information pose a serious privacy threat,” they said.

Facebook has a poor track record of preventing apps from improperly accessing users’ data. In 2018, the Guardian revealed that data analytics company Cambridge Analytica improperly obtained the information of up to 87 million Facebook users through a paid personality quiz. Facebook suspended Cambridge Analytica and SCL Group, its parent company, from the platform in mid-March of 2018, after the former used the data to create “psychological profiles” of U.S. voters for ad targeting.

In June 2018, Facebook announced that a bug had resulted in about 14 million Facebook users having their default sharing setting for all new posts set to “public.” And in April 2019, half a billion records of Facebook users were found exposed on Amazon cloud servers, containing information about users’ friends, likes, groups, and checked-in locations, as well as names, passwords, and email addresses.

In response to the Cambridge Analytica scandal and others, last July the U.S. Federal Trade Commission (FTC) imposed sweeping new privacy restrictions on Facebook, including a mandate to suspend third-party apps that don’t certify compliance with the company’s platform policies.

On Wednesday, Facebook announced updates to its Platform Terms and Developer Policies, set to enter into effect on August 31, 2020. The new terms will limit the information developers can share with third parties without receiving explicit consent from users, and also ensure developers clearly understand they have a responsibility to safeguard Facebook user data.

“Our study discovers the misuse of user data shared with third-party apps on Facebook since we only implement CanaryTrap for Facebook,” Shehroze said. “It is possible that the potential misuse of user data is happening on other platforms like Twitter and Instagram as well as various Google products (such as Gmail and the G Suite marketplace). Our existing implementation of CanaryTrap can be modified with reasonably minimal changes to monitor misuse of user data on other platforms as well. We believe that our approach can not only be adopted by these platforms but also by independent watchdogs or regulators like the FTC to monitor misuse of user data by third-party apps.”

We reached out to Facebook for comment on the research and whether the policy changes address the loopholes discovered by the researchers. A spokesperson said the company is reviewing the findings — we’ll update this post once we hear back.
