crazygringo 6 days ago

No, multi-armed bandit doesn't "beat" A/B testing, nor does it beat it "every time".

Statistical significance is statistical significance, end of story. If you want to show that option B is better than A, then you need to test B enough times.

It doesn't matter if you test it half the time (in the simplest A/B) or 10% of the time (as suggested in the article). If you do it 10% of the time, it's just going to take you five times longer.

And A/B testing can handle multiple options just fine, contrary to the post. The name "A/B" suggests two, but you're free to use more, and this is extremely common. It's still called "A/B testing".

Generally speaking, you want to find the best option and then remove the other ones because they're suboptimal and code cruft. The author suggests always keeping 10% exploring other options. But if you already know they're worse, that's just making your product worse for those 10% of users.
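For concreteness, the scheme the post describes is essentially epsilon-greedy with a fixed 10% exploration rate. A minimal sketch, assuming a binary click/no-click reward (names and structure are illustrative, not the post's actual code):

  import random

  counts = {"A": 0, "B": 0, "C": 0}   # times each option was shown
  rewards = {"A": 0, "B": 0, "C": 0}  # total clicks per option

  def choose_option(epsilon=0.10):
      # 10% of the time: show a random option (explore).
      if random.random() < epsilon:
          return random.choice(list(counts))
      # 90% of the time: show the option with the best observed click rate
      # (exploit); unseen options default to 1.0 so each gets tried once.
      return max(counts, key=lambda o: rewards[o] / counts[o] if counts[o] else 1.0)

  def record_result(option, clicked):
      counts[option] += 1
      rewards[option] += 1 if clicked else 0

That fixed 10% never goes away, which is the objection above: once one option is clearly best, that slice of traffic keeps being sent to known-worse options.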

LPisGood 6 days ago

Multi-armed bandit does beat A/B testing in the sense that standard A/B testing does not seek to maximize reward during the testing period, whereas MAB does. MAB also generalizes better than A/B testing to testing many options at once.
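
To make "maximize reward during the testing period" concrete, here is a toy simulation (assumed numbers: two options with true conversion rates of 5% and 10%, 10,000 trials) comparing a fixed 50/50 split against a 10% epsilon-greedy bandit:

  import random

  def simulate(allocator, true_rates={"A": 0.05, "B": 0.10}, n=10_000):
      counts = {k: 0 for k in true_rates}
      wins = {k: 0 for k in true_rates}
      total = 0
      for _ in range(n):
          option = allocator(counts, wins)
          reward = 1 if random.random() < true_rates[option] else 0
          counts[option] += 1
          wins[option] += reward
          total += reward
      return total

  def fixed_split(counts, wins):
      return random.choice(list(counts))      # plain 50/50 A/B allocation

  def epsilon_greedy(counts, wins, eps=0.10):
      if random.random() < eps:
          return random.choice(list(counts))  # explore
      return max(counts, key=lambda o: wins[o] / counts[o] if counts[o] else 1.0)

  print("50/50 split:", simulate(fixed_split))
  print("eps-greedy :", simulate(epsilon_greedy))

The bandit shifts most traffic onto B and finishes the period with more total conversions; the trade-off, as the rest of the thread argues, is that B's lead may not yet be statistically significant.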

  • cle 6 days ago

    This is a double-edged sword. There are often cases in real-world systems where the "reward" the MAB maximizes is biased by eligibility issues, system caching, bugs, etc. If this happens, your MAB has the potential to converge on the worst possible experience for your users, something a static treatment allocation won't do.

    • LPisGood 6 days ago

      I haven’t seen these particular shortcomings before, but I certainly agree that if your data is bad, this ML approach will also be bad.

      Can you share some more details about your experiences with those particular types of failures?

      • cle 6 days ago

        Sure! A really simple (and common) example would be a setup w/ treatment A and treatment B, where your code does "if session_assignment == A .... else .... B". In the else branch you do something that for whatever reason causes misbehavior (perhaps it sometimes crashes or throws an exception, or uses a buffer that drops records under high load to protect availability). That's surprisingly common. Or perhaps you were hashing on the wrong key to generate session assignments -- e.g. you accidentally used an ID that expires after 24 hours of inactivity... now only highly active people get correctly sampled.
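
        A toy illustration of how that first kind of bug poisons the reward data (made-up numbers, not the actual systems involved): suppose B is genuinely better, but its click-logging path intermittently fails, so half of B's successes are never recorded:

          import random

          TRUE_RATE = {"A": 0.05, "B": 0.10}      # B is genuinely better
          observed = {"A": [0, 0], "B": [0, 0]}   # [impressions, clicks] as logged

          def serve(assignment):
              clicked = random.random() < TRUE_RATE[assignment]
              if assignment == "A":
                  observed["A"][0] += 1
                  observed["A"][1] += clicked
              else:
                  observed["B"][0] += 1
                  # B's click-logging path intermittently throws under load,
                  # so some of B's successes never reach the reward data.
                  if clicked and random.random() < 0.5:
                      return
                  observed["B"][1] += clicked

          for _ in range(50_000):
              serve(random.choice(["A", "B"]))

          for arm, (shows, clicks) in observed.items():
              print(arm, round(clicks / shows, 3))  # B's observed rate collapses toward A's

        A bandit fed this data sees two roughly equal arms; if the drop rate were a bit higher, it would actively steer traffic toward the genuinely worse option.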

        Another common one I saw was due to different systems handling different treatments, with caching discrepancies between the two. Especially in a MAB, where allocations are constantly changing, if one system has a much longer TTL than the other you might see allocation lag for one treatment and not the other, biasing the data. Or perhaps one system deploys much more frequently and the load balancer draining doesn't wait for records to finish uploading before it kills the process.

        The most subtle ones were eligibility biases, where one treatment might cause users to drop out of an experiment entirely. Like if you have a signup form and you want to measure long-term retention, and one treatment causes some cohorts to not complete the signup entirely.

        There are definitely mitigations for these issues, like you can monitor the expected vs. actual allocations and alert if they go out-of-whack. That has its own set of problems and statistics though.
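
        For what it's worth, a minimal sketch of that expected-vs-actual allocation check, assuming you can pull assignment counts from logs and have scipy available (the numbers and the 50/50 target are made up):

          from scipy.stats import chisquare

          observed = {"A": 5210, "B": 4105}        # assignments actually logged
          expected_share = {"A": 0.50, "B": 0.50}  # allocation you intended to serve

          total = sum(observed.values())
          arms = sorted(observed)
          f_obs = [observed[a] for a in arms]
          f_exp = [expected_share[a] * total for a in arms]

          stat, p_value = chisquare(f_obs=f_obs, f_exp=f_exp)
          if p_value < 0.001:
              print(f"allocation drift (p={p_value:.1e}) -- check eligibility/caching/deploy issues")

        (Under a MAB the intended allocation itself keeps moving, which is part of the "own set of problems" mentioned above.)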

  • crazygringo 6 days ago

    No -- you can't have your cake and eat it too.

    You get zero benefits from MAB over A/B if you simply end your A/B test once you've achieved statistical significance and pick the best option. Which is what any efficient A/B test does -- there's no reason to have any fixed "testing period" beyond what is needed to achieve statistical significance.

    Whereas, to the contrary, the MAB described in the article does not maximize reward, as I explained in my previous comment. Because the post's version runs indefinitely, it has worse long-term reward: it continues to test inferior options long after they've been proven worse. If you leave it running, you're harming yourself.

    And I have no idea what you mean by MAB "generalizing" more. But it doesn't matter if it's worse to begin with.

    (Also, it's a huge red flag that the post doesn't even mention statistical significance.)
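
    To spell out what "achieved statistical significance" means operationally, a standard two-proportion z-test is the usual check for a binary conversion metric (an illustrative sketch, not anything from the post):

      from math import sqrt
      from statistics import NormalDist

      def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
          """Two-sided z-test for a difference in conversion rates."""
          p_a, p_b = conv_a / n_a, conv_b / n_b
          p_pool = (conv_a + conv_b) / (n_a + n_b)
          se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
          z = (p_b - p_a) / se
          p_value = 2 * (1 - NormalDist().cdf(abs(z)))
          return z, p_value

      # e.g. 480/10,000 conversions on A vs 560/10,000 on B
      z, p = two_proportion_z_test(480, 10_000, 560, 10_000)
      print(f"z={z:.2f}, p={p:.4f}")  # end the test once p drops below your chosen threshold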

    • LPisGood 6 days ago

      > you can't have your cake and eat it too

      I disagree. There is a vast array of literature on solving the MAB problem that may as well be grouped into a bin called “how to optimally strike a balance between having one’s cake and eating it too.”

      The optimization techniques for solving the MAB problem seek to maximize reward by striking the right balance between exploration and exploitation. In other words, these techniques attempt to determine the optimal way to balance exploring whether another option is better against exploiting the option currently predicted to be best.

      There is a strong reason this literature doesn’t start and end with: “just do A/B testing, there is no better approach”
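
      As a concrete example of what that literature offers beyond a flat 10% exploration rate, here is a sketch of UCB1, one standard bandit strategy (illustration only):

        import math

        def ucb1_choice(counts, rewards):
            """Pick the arm with the highest upper confidence bound (UCB1)."""
            # Play any arm that has never been tried.
            for arm, n in counts.items():
                if n == 0:
                    return arm
            total_plays = sum(counts.values())
            def ucb(arm):
                mean = rewards[arm] / counts[arm]
                bonus = math.sqrt(2 * math.log(total_plays) / counts[arm])
                return mean + bonus
            return max(counts, key=ucb)

      The exploration bonus shrinks as an arm accumulates plays, so clearly worse arms get tried less and less often instead of forever soaking up a fixed slice of traffic.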

      • crazygringo 6 days ago

        I'm not talking about the literature -- I'm talking about the extremely simplistic and sub-optimal procedure described in the post.

        If you want to get sophisticated, MAB properly done is essentially just A/B testing with optimal strategies for deciding when to end individual A/B tests, or balancing tests optimally for a limited number of trials. But again, it doesn't "beat" A/B testing -- it is A/B testing in that sense.

        And that's what I mean. You can't magically increase your reward while simultaneously getting statistically significant results. Either your results are significant to a desired level or not, and there's no getting around the number of samples you need to achieve that.

      • cauch 6 days ago

        Another way of seeing the situation: let your MAB solution run for a while. Orange has been tested 17 times and blue has been tested 12 times. This is exactly equivalent to doing an A/B test where you show the orange button to 17 people and the blue button to 12 people.

        The trick is to find the best number of tests for each color so that you get good statistical significance. MAB does not do that well, as you cannot easily force testing an option that looked bad early but hasn't had enough trials to reach significance (imagine you have 10 colors and orange starts out scoring 0/1. It will take a very long while before it gets re-tested meaningfully: you first need to fall into the 10% exploration bucket, and then there is still only a ~10% chance of randomly picking orange rather than one of the other colors, i.e. roughly 1% of trials). With A/B testing, you can do a power analysis beforehand (or at any point during) to know when to stop.

        The literature does not start with "just do A/B testing" because it is not the same problem. In MAB, your goal is not to demonstrate that one option is worse; it's to make the best decisions you can while you're in the situation.
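
        A sketch of that power analysis, using the standard two-proportion sample-size formula (the baseline rate, minimum detectable lift, alpha, and power are all assumed for illustration):

          from math import ceil, sqrt
          from statistics import NormalDist

          def sample_size_per_arm(p1, p2, alpha=0.05, power=0.80):
              """Approximate users needed per arm to detect p1 vs p2 (two-sided)."""
              z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
              z_beta = NormalDist().inv_cdf(power)
              p_bar = (p1 + p2) / 2
              numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                           + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
              return ceil(numerator / (p1 - p2) ** 2)

          # e.g. 5% baseline conversion, hoping to detect a lift to 6%
          print(sample_size_per_arm(0.05, 0.06))  # on the order of 8,000 users per arm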

  • ertdfgcvb 5 days ago

    Isn't that the point of testing (to not maximize reward but rather wait and collect data)? It sounds like maximizing reward during the experiment period can bias the results.