Maximizing and satisficing in multi-armed bandits with graph information
Abstract
Published In
NIPS '22: Proceedings of the 36th International Conference on Neural Information Processing Systems
- Editors: S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh
Publisher
Curran Associates Inc.
Red Hook, NY, United States