The freemium business model, in which you give away a service or app for free and charge a subscription fee for premium features, has taken off everywhere. But Ben Chestnut, chief executive of email marketing firm MailChimp, warns that freemium comes with a very unwelcome side effect: a tripling of reported abuse cases.
Chestnut was speaking at last Friday’s Freemium Summit. MailChimp lets anyone send email newsletters to customers, manage subscriber lists, and track the performance of an email marketing campaign. It gives marketers powerful tools: list segmentation, A/B testing (sending two variants of a message to see which performs better), and return-on-investment tracking. Because it exposes an open application programming interface (API), it can easily link to data from apps such as Salesforce.com or Drupal. But after years of experimenting with various strategies, the company had only 85,000 paying subscribers by 2009.
Then the company embraced a freemium model: it would give away the tool and charge subscription fees for special features. It launched the free version on Sept. 1, 2009, with a “Power to the People” campaign. Within seven months, the number of users shot up 240 percent to 290,000, and emails sent soared from 200 million a month to 450 million.
But that’s where the abuse started. Spammers flocked to the free service: abuse-related issues grew 354 percent, the staff needed to deal with abuse rose 200 percent, and legal costs grew 245 percent. At one point, Chestnut said, he had to decide whether to hire 30 new customer support representatives. The company had only 38 people on staff, so customer support would have become its biggest department, and the rapid hiring would likely have ruined the company’s culture.
Chestnut said the company’s approach to abuse was to deal with it when it saw it, rather than to anticipate it (hence the slide at right from his Freemium Summit presentation).
The real problem wasn’t “spammy spam,” such as “get rich quick” or penis-enlargement scams; software such as SpamAssassin can easily block those threats. Rather, it was “fuzzy spam,” where spammers do a good job of disguising their efforts, and which is by definition much harder to detect. Chestnut’s team studied other corporate websites and found that almost every one had to deal with the same issue.
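To see why “spammy spam” is the easy case, consider how rule-based filters in the SpamAssassin mold work: each matching rule adds points to a score, and mail above a threshold is flagged. Here is a minimal sketch of that idea; the rules, scores, and threshold are invented for illustration, not SpamAssassin’s actual rule set.

```python
# Toy keyword-scoring filter in the style of rule-based spam filters.
# Each matching phrase adds to a running score; mail scoring at or
# above the threshold is flagged as spam. (Hypothetical rules/weights.)
RULES = {
    "get rich quick": 4.0,
    "100% free": 2.5,
    "act now": 1.5,
}

def spam_score(text):
    """Sum the weights of every rule phrase found in the message."""
    lowered = text.lower()
    return sum(weight for phrase, weight in RULES.items() if phrase in lowered)

def is_spam(text, threshold=3.0):
    """Flag mail whose score meets or exceeds the threshold."""
    return spam_score(text) >= threshold

print(is_spam("Get rich quick! 100% free!"))                  # obvious spam
print(is_spam("Our spring newsletter is out for subscribers"))  # passes
```

A “fuzzy” spammer who simply avoids the trigger phrases slips straight past a filter like this, which is exactly the gap Chestnut’s team was left to close by hand.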
The abuse bridge was actually more like the one pictured at right, Chestnut said. So the company started Project Omnivore. It studied bad emails from the past eight years and devised a “genetic optimization algorithm” to identify the traits of emails and accounts where abuse occurred. But the software crashed the company’s servers and never completed scans of 10 years’ worth of emails. So the team bought an Nvidia Tesla-based supercomputer, which completed a test scan in two hours and showed the algorithm could predict bad emails accurately.
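Chestnut didn’t describe how Omnivore’s algorithm works internally. As a rough illustration of the general technique, a genetic algorithm can evolve a set of weights over account traits so that a weighted-sum rule separates abusive senders from legitimate ones. The sketch below is a minimal, generic example on invented toy data; the feature names, thresholds, and parameters are all assumptions, not MailChimp’s code.

```python
import random

random.seed(42)

# Toy labeled data: (trait vector, is_abusive). Hypothetical traits:
# [list imported in bulk, many dead addresses, high link density, new account]
DATA = [
    ([1.0, 1.0, 1.0, 1.0], 1),
    ([1.0, 1.0, 0.0, 1.0], 1),
    ([0.0, 1.0, 1.0, 1.0], 1),
    ([0.0, 0.0, 0.0, 0.0], 0),
    ([1.0, 0.0, 0.0, 0.0], 0),
    ([0.0, 0.0, 1.0, 0.0], 0),
]

def fitness(weights, threshold=1.5):
    """Fraction of examples a weighted-sum rule classifies correctly."""
    correct = 0
    for traits, label in DATA:
        score = sum(w * t for w, t in zip(weights, traits))
        if (score >= threshold) == bool(label):
            correct += 1
    return correct / len(DATA)

def evolve(pop_size=30, generations=40, n_traits=4):
    """Evolve trait weights: keep the fittest half, breed and mutate the rest."""
    pop = [[random.uniform(0, 2) for _ in range(n_traits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # selection (with elitism)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_traits)      # single-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                # occasional mutation
                child[random.randrange(n_traits)] = random.uniform(0, 2)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(f"best rule accuracy on toy data: {fitness(best):.2f}")
```

The point of the evolutionary approach is that no one has to hand-pick the weights: candidate rules compete, and the traits that actually predict abuse end up dominating the scoring.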
Then the team dispatched the serious number crunching to Amazon’s EC2 cloud. The simulation chewed up an enormous amount of computing time, sifting through more than 61 trillion examples. The company then put the technology in place to relieve its customer support team.
Thanks to the automated software, the company has sent 35,539 warnings, suspended 4,233 accounts, and shut down 1,193 users, all in the past seven months. Omnivore also turned out to be useful for predicting positive trends: it can, for instance, estimate the odds that users will open a given email in a marketing campaign, predicting, say, that they are 20 percent to 40 percent likely to open it. If you don’t anticipate the abuse and plan for it, you’ll probably end up like the slide on the right, Chestnut said.
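The talk didn’t explain how Omnivore arrives at those open-rate odds. A common baseline for this kind of per-subscriber prediction is a smoothed historical open rate, where a list-wide prior fills in for subscribers with little history. A minimal sketch, with the 20 percent prior and all numbers being assumptions of this example:

```python
def predicted_open_rate(opens, sends, prior_opens=2, prior_sends=10):
    """Smoothed per-subscriber open-rate estimate (Laplace-style).

    With no history the estimate equals the assumed list-wide prior
    (2/10 = 20%); with lots of history it approaches the subscriber's
    own historical open rate.
    """
    return (opens + prior_opens) / (sends + prior_sends)

# A brand-new subscriber defaults to the 20% prior; a subscriber who
# opened 40 of the last 100 emails lands a bit under 40%.
new_sub = predicted_open_rate(0, 0)
engaged = predicted_open_rate(40, 100)
print(f"new subscriber: {new_sub:.0%}, engaged subscriber: {engaged:.0%}")
```

Ranging such estimates across a subscriber list is one simple way to produce a campaign-level band like the “20 percent to 40 percent likely to open” figure Chestnut described.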
[slides: courtesy of @benchestnut; here’s a link to his full set of slides].