Comment by orasis
Check out improve.ai if you want to see this taken to the next level. We combined Thompson Sampling with XGBoost to build a multi-armed bandit that learns to choose the best arm for a given context. MIT licensed.
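
For anyone curious what "Thompson Sampling + gradient-boosted trees" can look like in practice, here's a minimal sketch of the general idea (this is *not* improve.ai's implementation, and the class/parameter names are made up for illustration): keep a small bootstrap ensemble of XGBoost regressors per arm, sample one ensemble member per arm at decision time, and pick the arm with the highest sampled reward estimate.

```python
# Hypothetical sketch of contextual Thompson Sampling with XGBoost via a
# bootstrap ensemble per arm. Not improve.ai's code; names are illustrative.
import numpy as np
from xgboost import XGBRegressor


class BootstrapTSBandit:
    def __init__(self, n_arms, n_bootstrap=5, seed=0):
        self.n_arms = n_arms
        self.n_bootstrap = n_bootstrap
        self.rng = np.random.default_rng(seed)
        # Per-arm (contexts, rewards) history; models are refit on update.
        self.history = [([], []) for _ in range(n_arms)]
        self.models = [[None] * n_bootstrap for _ in range(n_arms)]

    def update(self, arm, context, reward):
        xs, ys = self.history[arm]
        xs.append(context)
        ys.append(reward)
        X, y = np.asarray(xs), np.asarray(ys)
        # Refit each bootstrap member on a resampled copy of this arm's data,
        # which approximates sampling from the posterior over reward models.
        for b in range(self.n_bootstrap):
            idx = self.rng.integers(0, len(y), size=len(y))
            model = XGBRegressor(n_estimators=50, max_depth=3, verbosity=0)
            model.fit(X[idx], y[idx])
            self.models[arm][b] = model

    def select(self, context):
        x = np.asarray(context).reshape(1, -1)
        scores = []
        for arm in range(self.n_arms):
            fitted = [m for m in self.models[arm] if m is not None]
            if not fitted:
                # No data for this arm yet: force exploration.
                scores.append(np.inf)
                continue
            # Thompson step: sample one bootstrap member and score the context.
            model = fitted[self.rng.integers(len(fitted))]
            scores.append(float(model.predict(x)[0]))
        return int(np.argmax(scores))
```

Usage is the usual bandit loop: call `select(context)` to pick an arm, observe the reward, then `update(arm, context, reward)`. The bootstrap ensemble is just one common way to get Thompson-style exploration out of tree models that don't have an analytic posterior.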