Comment by dkarl
> I’ve been in companies that have tried dozens if not hundreds of A/B tests with zero statistically significant results.
What I've seen in practice is that some places trust their designers' decisions and only run A/B tests when competent people disagree or there's no clear, sound reason to choose one design over another. Surprise, surprise: those alternatives almost always test very close to each other!
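To put rough numbers on that (the baseline rate and lifts below are made up for illustration, not from any real product), here's a quick sample-size sketch showing why two designs that convert almost identically rarely produce a significant result:

```python
# Back-of-the-envelope per-arm sample size for a two-proportion z-test.
# All rates and lifts here are hypothetical, chosen only to show the scaling.
from statistics import NormalDist

def samples_per_arm(p_control: float, p_variant: float,
                    alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate users needed per arm to detect p_variant vs. p_control."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    effect = abs(p_variant - p_control)
    return int((z_alpha + z_beta) ** 2 * variance / effect ** 2) + 1

baseline = 0.05  # hypothetical 5% conversion rate
for relative_lift in (0.10, 0.02, 0.005):
    variant = baseline * (1 + relative_lift)
    n = samples_per_arm(baseline, variant)
    print(f"{relative_lift:.1%} relative lift -> ~{n:,} users per arm")
```

With a 5% baseline, a 10% relative lift needs tens of thousands of users per arm, and a 0.5% lift needs on the order of ten million, which is why "very close" almost always reads as "no significant difference."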
Other places remove virtually all friction from A/B testing and then use it religiously for every pixel in their product. They get results, but often it's things like "we discovered that pink doesn't work as well as red for a warning button," stuff they never would have tried if they didn't have to feed the A/B machine.
From all the evidence I've seen in places I've worked, the motivating stories of "we increased revenue 10% by a random change nobody thought would help" may only exist in blog posts.
I think trusting your designers is probably the way to go for most teams. Good designers have solid intuitions and design principles for what will increase conversion rates. Many designers will still want A/B tests so they can justify their impact, but those requests should probably be denied. For really important projects, designers should validate their designs with small-sample research, the way we used to.
I think A/B tests are still good for measuring stuff like system performance, which can be really hard to predict. Flipping a switch to completely change how you do caching can be scary.
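For the caching case, one low-risk pattern is to bucket a small, stable slice of users onto the new path and watch the latency numbers before flipping it for everyone. A minimal sketch (the bucket size, lookup stubs, and metric handling are all invented for illustration):

```python
# Route a stable fraction of users through a hypothetical new cache path and
# compare observed latencies against the old path. Lookup functions are stubs.
import hashlib
import time
from collections import defaultdict

ROLLOUT_PERCENT = 5  # hypothetical: start with 5% of users on the new path

def bucket(user_id: str) -> str:
    """Deterministically assign a user to the old or new cache path."""
    digest = hashlib.sha256(f"cache-experiment:{user_id}".encode()).hexdigest()
    return "new_cache" if int(digest, 16) % 100 < ROLLOUT_PERCENT else "old_cache"

# Stand-in lookups; in a real system these would hit the two cache backends.
def old_cache_lookup(user_id: str) -> str:
    time.sleep(0.002)
    return f"profile:{user_id}"

def new_cache_lookup(user_id: str) -> str:
    time.sleep(0.001)
    return f"profile:{user_id}"

latencies = defaultdict(list)  # variant name -> observed latencies

def fetch_profile(user_id: str) -> str:
    variant = bucket(user_id)
    lookup = new_cache_lookup if variant == "new_cache" else old_cache_lookup
    start = time.perf_counter()
    result = lookup(user_id)
    latencies[variant].append(time.perf_counter() - start)
    return result

for i in range(1000):
    fetch_profile(f"user-{i}")

for variant, samples in latencies.items():
    avg_ms = 1000 * sum(samples) / len(samples)
    print(f"{variant}: {len(samples)} requests, avg {avg_ms:.2f} ms")
```

Hashing the user ID instead of randomizing per request means each user sticks to one cache path, so the comparison isn't muddied by people bouncing between backends mid-session.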