Presented by Optimizely
Time for product developers to ditch the guesswork. Experimentation is an incredibly powerful technique that optimizes your products in real time and transforms your company’s offerings. Learn how to develop your own blueprint for experimentation success when you join this VB Live event.
It’s an all-too-familiar scenario in waterfall product development: a product is built over months or years based on best guesses, and the end result can be a (very public) disaster.
“At the end of this process, you can go to launch your big new product that you’re so excited about, only to realize in horror that your users hate it,” says Jon Noronha, Director of Product Management at Optimizely. “I experienced this firsthand — I was a product manager at Microsoft, and I joined Microsoft just after the launch of Windows Vista.”
Windows Vista is the most iconic example of the very common waterfall product launch — and you might remember how well it went over with users.
Those sorts of challenges and problems have led many companies to embrace the larger mindset of agile product development: frequent, small iterations, shipping work in chunks. But even when people talk about agile, they’re often thinking only of two-week sprints, task-tracking, and so on, which still doesn’t get at the real spirit of being agile.
“If you’re working on a three-year product cycle, don’t pat yourself on the back — you’re not agile,” Noronha says. “The real spirit of being agile is all about actually exposing what you’re working on to real users for feedback as quickly as possible, so you can learn and adapt.”
So what kinds of experiments should your team be running, and how can they save you an extraordinary amount of time and expense? Experimentation is a broad umbrella covering a number of powerful techniques that big tech companies are famous for using, including Facebook, Google, and yes, even Microsoft.
One is the painted door (or fake door) experiment: rather than putting a year into building a new feature, you put in the bare minimum of work to see whether the idea is even viable.
For example, developers at the Guardian were asked to build a “save to mobile” button for the news site. The team was skeptical based on what they knew of their users, so they ran a painted door experiment: when you clicked the save-to-mobile button, it simply said, “This feature is coming soon.” They found that essentially nobody ever clicked it, so they avoided spending a great deal of time and money building something only to have it fail.
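The mechanics of a painted door are simple enough to sketch in a few lines. This is a minimal, hypothetical illustration (the function and event names are invented, not the Guardian’s actual code): the button exists, but clicking it only records interest and shows a placeholder.

```python
import time

# Every click on the "painted door" lands here instead of in a real feature.
CLICK_LOG = []

def handle_save_to_mobile_click(user_id: str) -> str:
    """Log that the user wanted the feature, then show a placeholder."""
    CLICK_LOG.append({"user": user_id,
                      "event": "save_to_mobile_click",
                      "ts": time.time()})
    return "This feature is coming soon."

def interest_rate(total_visitors: int) -> float:
    """Fraction of visitors who clicked the painted door."""
    return len(CLICK_LOG) / total_visitors if total_visitors else 0.0

# Toy traffic: 3 clicks out of 10,000 visitors.
for uid in ("u1", "u2", "u3"):
    handle_save_to_mobile_click(uid)
print(interest_rate(10_000))  # 0.0003 -- probably not worth building
```

If the measured interest rate stays near zero, as it did for the Guardian, the feature can be shelved before any real engineering is spent on it.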
Facebook is famous for another powerful experimentation technique: the staged rollout, where you launch first to a very small percentage of the user base. The company frequently pushes new features to New Zealand first, since it’s a small, isolated, English-speaking market where a feature can be tested and refined before rolling out to other countries.
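A staged rollout is usually driven by a deterministic gate: hash each user ID to a stable number, then enable the feature for pilot regions first and for a gradually increasing percentage of everyone else. Here is a minimal sketch under those assumptions (the names and the New Zealand pilot list are illustrative, not any company’s actual implementation):

```python
import hashlib

def rollout_bucket(user_id: str) -> float:
    """Deterministically map a user to a stable number in [0, 1)."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest[:8], 16) / 0x100000000

def feature_enabled(user_id: str, country: str, percent: float,
                    pilot_countries: frozenset = frozenset({"NZ"})) -> bool:
    """Pilot countries get the feature first; others ramp up by percentage."""
    if country in pilot_countries:
        return True
    return rollout_bucket(user_id) < percent

# The same user always lands in the same bucket, so the audience only
# grows (never churns) as `percent` ramps from 0.01 toward 1.0.
```

Because the bucket is derived from a hash rather than a random draw, each user’s experience stays consistent across sessions as the rollout widens.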
And, of course, there’s the traditional A/B test. Companies like Google run thousands of A/B tests across their UX at any one time. Sometimes these target very cosmetic changes: Google famously once tested 41 shades of blue for its links against each other. But the best A/B tests are not that cosmetic. They’re much more focused on core functionality.
The point of A/B testing is quantifying the impact of a change. When Noronha was working on Bing, the team knew that Google was a little bit faster, but wasn’t sure whether the difference was causing users to switch from one site to the other. Rather than invest significant engineering effort upfront, they tested in the opposite direction: they artificially slowed down the site for a percentage of users. The results confirmed their suspicions. For every 100 milliseconds they slowed down the site, there was a corresponding drop-off in engagement, in advertising revenue, and more.
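The analysis behind such a slowdown test can be sketched in a few lines. Everything here is invented for illustration (the baseline, the per-100 ms effect size, and the metric are assumptions, not Bing’s actual numbers): simulate a control arm and a deliberately slowed arm, then compare a simple engagement metric between them.

```python
import random
import statistics

random.seed(0)

def simulate_engagement(extra_latency_ms: float, n: int = 5000) -> list:
    """Toy model: engagement falls ~0.6% per 100 ms of added delay."""
    base = 10.0  # pages per session in the control arm (assumed)
    mean = base * (1 - 0.006 * extra_latency_ms / 100)
    return [random.gauss(mean, 2.0) for _ in range(n)]

# Control arm sees the normal site; treatment arm gets +200 ms of delay.
control = simulate_engagement(0)
treated = simulate_engagement(200)

drop = 1 - statistics.mean(treated) / statistics.mean(control)
print(f"relative engagement drop: {drop:.1%}")
```

With real traffic instead of simulated data, the same comparison puts a concrete number on what each 100 milliseconds of latency costs, which is exactly what let the Bing team justify a dedicated performance effort.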
“We could see, in very concrete terms, the millions of dollars that performance was costing us, which then let us build a whole performance engineering team to combat the problem,” explains Noronha.
When you get an experiment right, the results can be astonishing, but when you get something wrong, it can be equally dramatic, only in the worst way. Even very experienced companies misstep: someone at Amazon once pushed new code that accidentally changed the prices of all products to just one penny, much to shoppers’ delight.
However, all of this experimentation is only as strong as the metrics you use to judge it, Noronha says.
“People often get very excited about the idea of A/B testing, but they’re unaware of some of the hidden pitfalls around choosing success criteria for an experiment,” he says. “I think you can actually pin a lot of the missteps you see some of these big technology companies making on exactly that.”
For example, we’ve all seen the reports on how a focus on pure engagement has led Facebook into a lot of trouble over the last few years. But experimentation is worth it, from the major gains and cost savings to the morale in your department.
“When you have a shared responsibility for driving an outcome, whether it’s getting people to spend time using your app when you have an engagement problem, or increasing donations to a charity website, you’ll have a much more engaged team that works better together,” says Noronha.
Now is the time to launch an experimentation program to transform your products and services as well as your teamwork. Learn how to create your own experimentation blueprints, choose the right metrics, avoid the missteps that even major companies can make, and more when you join this VB Live event!
Don’t miss out!
You’ll learn about:
- The most common mistakes product teams make when running experiments
- Which metrics correlate best with your business’s success
- Strategies to scale experimentation across multiple teams and squads
- How the world’s top technology companies are able to experiment on all product decisions
Speaker:

- Jon Noronha, Director of Product Management at Optimizely