- Description:
Over the past decade, contextual bandit algorithms have been gaining in popularity due to their effectiveness and flexibility in solving sequential decision problems---from online advertising and finance to clinical trial design and personalized medicine. At the same time, there are, as of yet, surprisingly few options that enable researchers and practitioners to simulate and compare the wealth of new and existing bandit algorithms in a standardized way. To help close this gap between analytical research and empirical evaluation, the current paper introduces the object-oriented \proglang{R} package \pkg{contextual}: a user-friendly and, through its object-oriented design, easily extensible framework that facilitates the parallelized comparison of contextual and context-free bandit policies through both simulation and offline analysis.
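As a sketch of the kind of comparison the package facilitates, the following example uses the `Bandit`/`Policy`/`Agent`/`Simulator` classes described in the package's documentation to run two policies against a simulated contextual linear bandit and plot their cumulative reward. Class names, constructor arguments, and defaults are as the author recalls them from the package documentation and should be checked against the current release:

```r
# Requires the contextual package: install.packages("contextual")
library(contextual)

horizon     <- 100L   # number of time steps per simulation
simulations <- 100L   # number of repeated simulations to average over

# A simulated contextual bandit with k = 4 arms and d = 3 context features.
bandit <- ContextualLinearBandit$new(k = 4, d = 3)

# Two agents: a context-free epsilon-greedy policy and a contextual LinUCB policy,
# both playing the same bandit.
agents <- list(
  Agent$new(EpsilonGreedyPolicy$new(epsilon = 0.1), bandit),
  Agent$new(LinUCBDisjointPolicy$new(alpha = 0.6), bandit)
)

# Run the (parallelized) simulation and plot cumulative reward per policy.
simulation <- Simulator$new(agents, horizon, simulations)
history    <- simulation$run()
plot(history, type = "cumulative")
```

Because each policy is paired with the bandit through its own `Agent`, adding a further policy to the comparison amounts to appending one more `Agent$new(...)` entry to the `agents` list.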
- Changes to previous version:
Minor update.
- Supported Operating Systems: Agnostic
- Data Formats: Agnostic
- Tags: Reinforcement Learning, Simulation, Data Generator, Context Aware Recommendation, Bandits, Comparisons
- Other available revisions:
  - 0.9.8.4 (July 27, 2020, 16:05:32): Minor update.
  - 0.9.8.3 (March 4, 2020, 21:54:38): Minor update.
  - 0.9.8.2 (July 9, 2019, 14:22:28): Minor update.
  - 0.9.8 (February 10, 2019, 16:32:57): Major update: Offline Bandit API overhaul, now making use of R formulae. More demo R scripts added. New contextual bandits and policies. Bug fixes.
  - 0.9.1 (December 20, 2018, 14:45:40): Initial announcement on mloss.org.