A Tutorial on Thompson Sampling

Publication Date: Jul 2018

ISBN: 9781680834703



Thompson sampling is an algorithm for online decision problems in which actions are taken sequentially and must balance exploiting what is known to maximize immediate performance against investing to accumulate new information that may improve future performance. The algorithm addresses a broad range of problems in a computationally efficient manner and is therefore enjoying wide use.

A Tutorial on Thompson Sampling covers the algorithm and its application, illustrating concepts through a range of examples, including Bernoulli bandit problems, shortest path problems, product recommendation, assortment, active learning with neural networks, and reinforcement learning in Markov decision processes. Most of these problems involve complex information structures, where information revealed by taking an action informs beliefs about other actions. The tutorial also discusses when and why Thompson sampling is or is not effective, and its relation to alternative algorithms.
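To give a flavor of the algorithm in the Bernoulli bandit setting mentioned above, here is a minimal sketch, not taken from the monograph: each arm's success probability gets a Beta posterior, and at each step the algorithm samples a plausible mean from every posterior and pulls the argmax. Function and variable names are illustrative only.

```python
import random

def thompson_sampling_bernoulli(true_probs, horizon, seed=0):
    """Illustrative Beta-Bernoulli Thompson sampling.

    Each arm keeps a Beta(successes + 1, failures + 1) posterior,
    starting from a uniform Beta(1, 1) prior.
    """
    rng = random.Random(seed)
    k = len(true_probs)
    successes = [0] * k
    failures = [0] * k
    total_reward = 0
    for _ in range(horizon):
        # Sample a plausible mean reward for each arm from its posterior.
        samples = [rng.betavariate(successes[i] + 1, failures[i] + 1)
                   for i in range(k)]
        # Exploit/explore implicitly: pull the arm with the best sample.
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_probs[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
    return total_reward, successes, failures
```

Because actions are chosen by posterior sampling rather than by a point estimate, arms with uncertain estimates are still tried occasionally, which is how the algorithm balances exploration against exploitation.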

  • 1. Introduction
  • 2. Greedy Decisions
  • 3. Thompson Sampling for the Bernoulli Bandit
  • 4. General Thompson Sampling
  • 5. Approximations
  • 6. Practical Modeling Considerations
  • 7. Further Examples
  • 8. Why it Works, When it Fails, and Alternative Approaches
  • Acknowledgements
  • References
Pages: 112
Date Published: 30 Jul 2018
Publisher: Now Publishers
Series: Foundations and Trends® in Machine Learning
Series Part: 34
Language: English
Dimensions: 233 x 155 x 6