randomcatuser 6 days ago

Yeah basically. The idea is that this is the data-efficient way of determining which one is best (rather than splitting your traffic 50/50 and wasting a lot of samples on a variant you already know is worse).
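
For the curious, here's a minimal sketch of the mechanics, assuming Thompson sampling over Beta-Bernoulli arms (the variant names and conversion rates are invented for illustration):

    import random

    # Hypothetical two-variant setup; names and rates are invented.
    TRUE_RATES = {"A": 0.04, "B": 0.05}  # unknown to the algorithm

    # Beta(1, 1) prior per arm: alpha counts conversions, beta counts misses.
    posterior = {arm: [1, 1] for arm in TRUE_RATES}
    traffic = {arm: 0 for arm in TRUE_RATES}

    for _ in range(10_000):
        # Thompson sampling: draw a plausible rate from each arm's posterior
        # and send this visitor to whichever arm drew the highest rate.
        arm = max(posterior, key=lambda a: random.betavariate(*posterior[a]))
        traffic[arm] += 1
        converted = random.random() < TRUE_RATES[arm]
        posterior[arm][0] += converted       # success count
        posterior[arm][1] += not converted   # failure count

    print(traffic)  # most visitors end up on the better arm, not 5000/5000

A newly pushed variant would just enter the loop as a fresh Beta(1, 1) arm.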

The caveats (perhaps not mentioned in the article) are:

- Perhaps you have many metrics you need to track/analyze (CTR, conversion, and other rate metrics), so you can't strictly do bandit!
- As someone mentioned below, sometimes the situation is dynamic, so having evenly sized groups helps with capturing this effect (one common mitigation is sketched below)
- Maybe some other ones I can't think of?
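
On the dynamic caveat: a common mitigation (my addition, not from the article) is to discount old observations so the bandit keeps re-exploring when rates drift. Roughly:

    import random

    # Same Beta-Bernoulli idea, but with a discount so old observations fade
    # and the bandit can track a drifting conversion rate. GAMMA and the
    # decay-toward-the-prior rule are assumptions of this sketch (discounted
    # Thompson sampling), not something from the article.
    GAMMA = 0.999  # per-step discount; closer to 1 = longer memory

    posterior = {"A": [1.0, 1.0], "B": [1.0, 1.0]}  # [alpha, beta] per arm

    def choose():
        return max(posterior, key=lambda a: random.betavariate(*posterior[a]))

    def update(arm, reward):
        # Decay every arm's counts toward the Beta(1, 1) prior...
        for a, (alpha, beta) in list(posterior.items()):
            posterior[a] = [1 + GAMMA * (alpha - 1), 1 + GAMMA * (beta - 1)]
        # ...then add the fresh observation, which carries the most weight.
        posterior[arm][0] += reward
        posterior[arm][1] += 1 - reward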

But you can imagine this kind of auto-testing being useful... imagine an AI continually pushing new variants, and the bandit continually learning which one is best.

cle 6 days ago

It still misses the biggest challenge though: defining "best", and ensuring you're actually measuring it and not something else.

It's useful as long as your definition is good enough and your measurements and randomizations aren't biased. Are you monitoring this over time to ensure that it continues to hold? If you aren't, you risk your MAB converging on something very different from what you would consider "the best".

When it converges on the right thing, it's better. When it converges on the wrong thing, it's worse. Which will it do? What's the magnitude of the upside vs downside?
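
To make that concrete, here's a toy simulation (all numbers and the click/sale framing are mine, not the article's): reward the bandit on clicks while the business outcome is a downstream sale, and it cheerfully converges on the wrong arm:

    import random

    # Toy illustration, all numbers invented: arm A is clickbaity (more
    # clicks, fewer sales per click), arm B is the opposite.
    CTR   = {"A": 0.10, "B": 0.06}  # P(click | impression)
    SALES = {"A": 0.02, "B": 0.08}  # P(sale | click)

    posterior = {a: [1, 1] for a in CTR}
    clicks = {a: 0 for a in CTR}
    sales = {a: 0 for a in CTR}

    for _ in range(20_000):
        arm = max(posterior, key=lambda a: random.betavariate(*posterior[a]))
        clicked = random.random() < CTR[arm]
        # The bandit is rewarded on clicks only...
        posterior[arm][0] += clicked
        posterior[arm][1] += not clicked
        clicks[arm] += clicked
        # ...while the sale it never observes happens downstream.
        if clicked and random.random() < SALES[arm]:
            sales[arm] += 1

    print(clicks)  # traffic converges on A, the "winner" by CTR
    print(sales)   # but B would have produced more sales per visitor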

  • desert_rue 5 days ago

    Are you saying that it may do something like improve click-the-button conversion but lead to fewer sales overall?

iforgot22 6 days ago

Facebook or YouTube might already be using an algo like this or AI to push variants, but for each billion-user product, there are probably thousands of smaller products that don't need something this automated.