Where A/B testing does (and doesn’t) make sense

Peter Seibel discusses data-driven design at Etsy in his post Building websites with science. After going over the dangers of relying solely on A/B testing for product decisions, he concludes:

Ultimately the goal is to make great products. Great ideas from designers are a necessary ingredient. And A/B testing can definitely improve products. But best is to use both: establish a loop between good design ideas leading to good experiments leading to knowledge about the product leading to even better design ideas. And then allow designers the latitude to occasionally try things that can’t yet be justified by science or even things that may go against current “scientific” dogma.

This echoes Julie Zhuo’s thoughts in The Agony and Ecstasy of Building with Data:

You can’t A/B test your way into big, bold new strategies. Something like the iPhone is impossible to A/B test. If you had asked people or invited them to come into the lab to try some stuff out, they would have preferred a physical keyboard to a virtual one. If you had them use an early prototype of the touch screen where not every gesture registered perfectly, it would have felt bad and tested poorly. […]

Data and A/B tests are valuable allies, and they help us understand and grow and optimize, but they’re not a replacement for clear-headed, strong decision-making. Don’t become dependent on their allure. Sometimes, a little instinct goes a long way.

This all relates back to the difference between variation (trying out different ideas) and iteration (small changes to improve an existing idea). A/B testing is great for iteration, but not for variation. For variation we need our brains, and lots of paper and pencils.
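To make the “iteration” half a bit more concrete, here is a minimal sketch of the kind of question A/B testing is good at answering: did a small change move a conversion rate by a measurable amount? The numbers and the function are purely illustrative (a standard two-proportion z-test), not anything from Seibel’s or Zhuo’s posts.

```python
from math import sqrt
from statistics import NormalDist

def ab_test_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: is the variant's conversion rate
    significantly different from the control's?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled rate under the null hypothesis that A and B convert equally
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Illustrative numbers: a small copy change on a signup button
p_a, p_b, z, p = ab_test_z(conversions_a=480, visitors_a=10_000,
                           conversions_b=540, visitors_b=10_000)
print(f"control {p_a:.2%}, variant {p_b:.2%}, z={z:.2f}, p={p:.3f}")
```

A test like this can tell you whether one button outperforms another; it cannot tell you whether the page should exist at all, or what a radically different design would look like. That part is the variation work, and it happens before there is anything to measure.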