Mechanical Turk is Amazon’s army of pieceworkers, ready to help you blend computation with human tasks in web apps.
What I didn’t know is that MTurk is also a powerful tool for testing and refining ideas. I learned this while interviewing Dan Shapiro onstage at the Founder Showcase last week.
Shapiro is a remarkably successful entrepreneur. His second startup, Sparkbuy, was acquired by Google just six months after he launched it.
That’s after a successful go with his first startup, Ontela, which merged with Photobucket in 2009. That company took a relatively pokey four years to arrive at an exit. Of course, by most people’s standards, four years would be plenty fast.
But what makes Shapiro’s approach to starting companies so interesting is the thorough, pragmatic approach he takes to market testing.
“I’m always skeptical when I get too in love with an idea,” Shapiro told me.
So when he had an idea for making it easier to find and compare electronics on e-commerce sites, he turned to Mechanical Turk to test and refine the plan.
(It also helped that a Google business development executive he met on a plane expressed interest in the idea, but “that was just a tiny, positive indicator in the grand scheme of things,” Shapiro said.)
Mechanical Turk, a project Amazon.com started in 2005, is a brilliant fusion of human labor and programmatic computation. Using it, you can incorporate human effort into your web-based software simply by making an API call. It’s no surprise that entrepreneurs are excited about using MTurk as a low-cost way of recruiting help, particularly for repetitive tasks.
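For the curious, here's a rough sketch of what "making an API call" looks like in practice. This isn't Sparkbuy's code; the survey URL, reward, and task text below are invented, and the parameter names follow Amazon's `CreateHIT` operation (as exposed by, for example, the boto3 Python SDK's `create_hit` call).

```python
# Hypothetical sketch: assembling the request for MTurk's CreateHIT operation.
# The URL and reward are made up. In real use you'd pass this dict to an MTurk
# client, e.g. boto3's client("mturk").create_hit(**params).

def build_survey_hit(survey_url: str, reward_usd: float, workers: int) -> dict:
    """Assemble CreateHIT parameters for an externally hosted survey."""
    # An ExternalQuestion shows your own survey page inside the worker's
    # task frame -- this is how external survey tools plug into MTurk.
    question_xml = f"""
    <ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
      <ExternalURL>{survey_url}</ExternalURL>
      <FrameHeight>600</FrameHeight>
    </ExternalQuestion>"""
    return {
        "Title": "Short consumer survey",
        "Description": "Answer a few questions about how you shop for laptops",
        "Reward": f"{reward_usd:.2f}",           # MTurk expects a dollar string
        "MaxAssignments": workers,               # how many distinct workers
        "LifetimeInSeconds": 7 * 24 * 3600,      # keep the task listed a week
        "AssignmentDurationInSeconds": 30 * 60,  # 30 minutes per worker
        "Question": question_xml,
    }

params = build_survey_hit("https://example.com/my-survey", 0.26, 100)
```

With boto3, calling `create_hit(**params)` against Amazon's sandbox endpoint lets you test the whole flow without spending real money.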
But it’s also a great, low-cost tool for doing surveys, and that’s exactly what Shapiro did.
The first part of his surveys is always the set of eight questions from the U.S. Census. That helps him determine demographics and figure out how “normal” his respondents are.
Then he follows up by asking them a ton of questions.
First, Shapiro asked 100 people to describe a laptop as if a friend were going to buy it for them. Then he analyzed the responses, categorized all the words they used, and ran a second survey to measure how important each of those words was. After that, he did follow-up interviews.
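That open-ended-then-quantify approach is simple to prototype. Here's a rough sketch of the first pass, tallying which words come up most often so they can become the candidate criteria for the follow-up importance survey. The example responses are invented, not Shapiro's data, and his actual analysis was presumably more careful:

```python
# Rough sketch: counting the words people use in free-text survey answers.
# The responses below are invented examples, not real survey data.
import re
from collections import Counter

responses = [
    "Something cheap with lots of RAM so it feels fast",
    "A fast, light laptop that can run Photoshop",
    "Cheap, decent screen, enough RAM for browsing",
]

# Throw away filler words so only meaningful criteria get counted.
STOPWORDS = {"a", "so", "it", "that", "can", "with", "of", "for",
             "the", "enough", "lots", "something"}

counts = Counter(
    word
    for text in responses
    for word in re.findall(r"[a-z]+", text.lower())
    if word not in STOPWORDS
)

# The most frequent terms become candidate criteria for the
# second survey ("how important is X to you?").
top_terms = [word for word, _ in counts.most_common(5)]
```

The second survey then asks respondents to rate each of `top_terms` directly, turning a pile of free text into a ranked list of purchase criteria.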
What Shapiro found was that the #1 criterion for laptop shoppers was price (no surprise there). But the #2 criterion was quantity of RAM, which was a bit surprising because it is an unusually geeky metric. Who really cares how much RAM their notebook has, after all, except really techie people? After doing some interviews, he realized that what people really wanted was speed, but there was no way on electronics sites to specify "I want a laptop that's fast enough to run Photoshop."
Using these answers from a series of surveys, Shapiro was able to craft a business plan for a company that would let you shop for laptops based on criteria people actually care about, such as the ability to run Photoshop, or weight, or color. What's more, he knew from his market research that these were the criteria customers would be most likely to respond to, so his business idea was essentially pre-tested.
“I love MTurk,” Shapiro said.
He also used MTurk in the course of business, not just for business plan testing. For example, Sparkbuy’s database of laptop attributes was built in part by an army of “Turkers.” And at Ontela, he’d put out surveys with 100 or more questions about the wireless industry, using them as a valuable market research tool.
The price is almost ridiculously low. Shapiro said he would pay about 26 cents apiece for people to answer these 100-question surveys.
Shapiro’s not a solitary genius — others, particularly academics, have discovered the value of using MTurk in research. In 2009, someone named Alex Frakking described in detail how he used Mechanical Turk for conducting surveys. He paid a bit more: about 3 cents per survey question, in an attempt to keep the hourly rate between $8 and $12.
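Frakking's pricing rule is easy to sanity-check with back-of-the-envelope arithmetic. In this sketch, the 18-minute completion time is an assumption for illustration, not a figure from his write-up:

```python
def implied_hourly_rate(reward_usd: float, minutes_to_complete: float) -> float:
    """What a worker effectively earns per hour at a given survey reward."""
    return reward_usd * 60 / minutes_to_complete

# At 3 cents per question, a 100-question survey pays $3.00. If we assume it
# takes a worker about 18 minutes, the effective wage lands inside
# Frakking's $8-$12/hour target.
rate = implied_hourly_rate(100 * 0.03, 18)
```

Running the numbers the other way tells you how to set the reward: pick a target hourly rate, estimate completion time, and the per-survey price falls out.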
Frakking makes an interesting point, which is that the very people who fill out your survey on MTurk might turn into some of your earliest customers. You can make that easier by letting them opt in to a mailing list so you can contact them when you launch. "In the last big survey I did, about 20 percent of respondents gave their email for just that purpose, meaning the survey can pay for itself in leads," Frakking concludes.
Are Mechanical Turk surveys statistically valid? Absolutely — or at least as valid as phone or website surveys.
“The funny thing is,” Shapiro told me onstage, “if you actually look at the methodologies behind the way everyone else does it, it’s just the same.”
In a 2010 study, researchers compared surveys done with MTurk to surveys of the traditional sociological pool (Midwestern university students) and of people found on Internet discussion boards. MTurk compared favorably.
The study concluded “experimenters should consider Mechanical Turk as a viable alternative for data collection,” although it warned that subjects are susceptible to the same kinds of experimental bias found in other arenas. The takeaway: Design your surveys carefully.
Also, the authors warn, unlike undergraduates, MTurk workers aren't replaced with a new crop every few years, so there's the potential for long-term relationships between surveyors and those surveyed. So don't be a jerk: Treat your survey respondents right and they'll be there for you, potentially for years.
For people who are interested in following Shapiro's lead, there's an open-source survey tool that works with MTurk, called LimeSurvey. And IT World published a detailed list of tips on running experiments or surveys on MTurk.
The rest of my discussion with Shapiro covered topics such as who should raise venture capital (not everyone), his experiences selling Sparkbuy and merging Ontela and Photobucket, and his thoughts on crowdfunding. It’s worth a listen. The whole 30-minute interview is below.