Bandit Algorithms by Tor Lattimore and Csaba Szepesvári (2020, Hardcover)

Seller: AlibrisBooks (461370)
98.6% positive feedback
Price:
US $60.67
Approximately £45.83
+ $15.32 postage
Estimated delivery Mon, 18 Aug - Mon, 25 Aug
Returns:
30 days return. Buyer pays for return postage. If you use an eBay delivery label, it will be deducted from your refund amount.
Condition:
New
New hardcover

About this product

Product Identifiers

Publisher: Cambridge University Press
ISBN-10: 1108486827
ISBN-13: 9781108486828
eBay Product ID (ePID): 8038912590

Product Key Features

Number of Pages: 536
Publication Name: Bandit Algorithms
Language: English
Subject: General, Computer Vision & Pattern Recognition
Publication Year: 2020
Type: Textbook
Author: Tor Lattimore, Csaba Szepesvári
Subject Area: Mathematics, Computers
Format: Hardcover

Dimensions

Item Height: 1.3 in
Item Weight: 37.7 oz
Item Length: 9.9 in
Item Width: 7.2 in

Additional Product Features

Intended Audience: Scholarly & Professional
LCCN: 2019-053276
Dewey Edition: 23
Reviews: 'This year marks the 68th anniversary of "multi-armed bandits" introduced by Herbert Robbins in 1952, and the 35th anniversary of his 1985 paper with me that advanced multi-armed bandit theory in new directions via the concept of "regret" and a sharp asymptotic lower bound for the regret. This vibrant subject has attracted important multidisciplinary developments and applications. Bandit Algorithms gives it a comprehensive and up-to-date treatment, and meets the need for such books in instruction and research in the subject, as in a new course on contextual bandits and recommendation technology that I am developing at Stanford.' Tze L. Lai, Stanford University
Illustrated: Yes
Dewey Decimal: 519.3
Table of Contents: 1. Introduction; 2. Foundations of probability; 3. Stochastic processes and Markov chains; 4. Finite-armed stochastic bandits; 5. Concentration of measure; 6. The explore-then-commit algorithm; 7. The upper confidence bound algorithm; 8. The upper confidence bound algorithm: asymptotic optimality; 9. The upper confidence bound algorithm: minimax optimality; 10. The upper confidence bound algorithm: Bernoulli noise; 11. The Exp3 algorithm; 12. The Exp3-IX algorithm; 13. Lower bounds: basic ideas; 14. Foundations of information theory; 15. Minimax lower bounds; 16. Asymptotic and instance dependent lower bounds; 17. High probability lower bounds; 18. Contextual bandits; 19. Stochastic linear bandits; 20. Confidence bounds for least squares estimators; 21. Optimal design for least squares estimators; 22. Stochastic linear bandits with finitely many arms; 23. Stochastic linear bandits with sparsity; 24. Minimax lower bounds for stochastic linear bandits; 25. Asymptotic lower bounds for stochastic linear bandits; 26. Foundations of convex analysis; 27. Exp3 for adversarial linear bandits; 28. Follow the regularized leader and mirror descent; 29. The relation between adversarial and stochastic linear bandits; 30. Combinatorial bandits; 31. Non-stationary bandits; 32. Ranking; 33. Pure exploration; 34. Foundations of Bayesian learning; 35. Bayesian bandits; 36. Thompson sampling; 37. Partial monitoring; 38. Markov decision processes.
Synopsis: Decision-making in the face of uncertainty is a significant challenge in machine learning, and the multi-armed bandit model is a commonly used framework to address it. This comprehensive and rigorous introduction to the multi-armed bandit problem examines all the major settings, including stochastic, adversarial, and Bayesian frameworks. A focus on both mathematical intuition and carefully worked proofs makes this an excellent reference for established researchers and a helpful resource for graduate students in computer science, engineering, statistics, applied mathematics and economics. Linear bandits receive special attention as one of the most useful models in applications, while other chapters are dedicated to combinatorial bandits, ranking, non-stationary problems, Thompson sampling and pure exploration. The book ends with a peek into the world beyond bandits with an introduction to partial monitoring and learning in Markov decision processes. (An illustrative sketch of one of the book's core algorithms follows this section.)
LC Classification Number: QA402.5.L367 2020
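
Illustrative example (our own sketch, not text or code from the book): Chapter 7 of the table of contents covers the upper confidence bound (UCB) algorithm, a core method for the finite-armed stochastic bandit problem the synopsis describes. The short Python sketch below implements the standard UCB1 index; the pull_arm callback and the two Bernoulli arms in the usage lines are hypothetical stand-ins for a real reward environment.

    # Minimal UCB1 sketch for a finite-armed stochastic bandit.
    # pull_arm(arm) is a user-supplied callback returning a reward in [0, 1].
    import math
    import random

    def ucb1(pull_arm, n_arms, horizon):
        counts = [0] * n_arms      # times each arm has been played
        sums = [0.0] * n_arms      # cumulative reward per arm
        total = 0.0
        for t in range(1, horizon + 1):
            if t <= n_arms:
                arm = t - 1        # initialisation: play every arm once
            else:
                # UCB1 index: empirical mean + sqrt(2 ln t / plays)
                arm = max(range(n_arms),
                          key=lambda a: sums[a] / counts[a]
                          + math.sqrt(2 * math.log(t) / counts[a]))
            reward = pull_arm(arm)
            counts[arm] += 1
            sums[arm] += reward
            total += reward
        return total

    # Hypothetical usage: two Bernoulli arms with means 0.4 and 0.6.
    means = [0.4, 0.6]
    print(ucb1(lambda a: float(random.random() < means[a]),
               n_arms=2, horizon=10000))

The index adds an exploration bonus to each arm's empirical mean, so under-sampled arms are revisited; for bounded rewards this keeps the regret logarithmic in the horizon, the kind of guarantee the book develops in its UCB chapters.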
