
Admission control policies for a multi-class QoS-aware service oriented architecture

Published: 09 April 2012

Abstract

In the service computing paradigm, a service broker can build new applications by composing network-accessible services offered by loosely coupled, independent providers. In this paper, we address the problem of equipping a service broker, which offers prospective users a composite service with a range of different Quality of Service (QoS) classes, with a forward-looking admission control policy based on Markov Decision Processes (MDPs). This mechanism allows the broker to decide whether to accept or reject a new potential user so as to maximize its gain while guaranteeing the non-functional QoS requirements of its already admitted users. We model the broker as a continuous-time MDP and consider various techniques suitable for solving both infinite-horizon and finite-horizon MDPs. To assess the effectiveness of MDP-based admission control for the service broker, we present simulation results in which we compare the optimal decisions obtained by the analytical solution of the MDP with other admission control policies. To deal with large problem instances, we also propose a heuristic policy for the MDP solution.
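To illustrate the kind of decision problem the abstract describes, the following is a minimal, hypothetical sketch of admission control formulated as a uniformized continuous-time MDP and solved by value iteration. The two-class model, capacity, arrival and service rates, and per-class rewards are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch: two-class admission control as a uniformized
# continuous-time MDP solved by value iteration. All parameters below
# are illustrative assumptions, not taken from the paper.

C = 5                        # broker capacity (assumed)
lam = [0.6, 0.4]             # per-class arrival rates (assumed)
mu = 1.0                     # per-user service completion rate (assumed)
r = [10.0, 4.0]              # lump-sum reward for admitting a class-k user
beta = 0.1                   # continuous-time discount rate

Lam = sum(lam) + C * mu      # uniformization constant
g = Lam / (Lam + beta)       # equivalent discrete-time discount factor

# State (a, b): number of admitted class-0 and class-1 users.
states = [(a, b) for a in range(C + 1)
          for b in range(C + 1) if a + b <= C]
V = {s: 0.0 for s in states}

for _ in range(500):                       # value iteration to convergence
    Vn = {}
    for a, b in states:
        n, v = a + b, 0.0
        for k, nk in ((0, a), (1, b)):
            nxt = (a + 1, b) if k == 0 else (a, b + 1)
            # On a class-k arrival: accept (collect r[k]) or reject;
            # rejection is forced when the broker is at capacity.
            best = V[(a, b)] if n == C else max(V[(a, b)], r[k] + V[nxt])
            v += lam[k] * best
            if nk > 0:                     # departure of a class-k user
                dep = (a - 1, b) if k == 0 else (a, b - 1)
                v += nk * mu * V[dep]
        v += (C - n) * mu * V[(a, b)]      # fictitious self-loop transitions
        Vn[(a, b)] = g * v / Lam
    V = Vn

def admit(a, b, k):
    """Optimal decision: admit a class-k arrival in state (a, b)?"""
    if a + b == C:
        return False
    nxt = (a + 1, b) if k == 0 else (a, b + 1)
    return r[k] + V[nxt] >= V[(a, b)]
```

Under this sketch, admitting a user is optimal exactly when its lump-sum reward outweighs the opportunity cost V(s) − V(s′) of the occupied slot, which is how a forward-looking policy can reject low-reward users to reserve capacity for more profitable ones.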


Published In

ACM SIGMETRICS Performance Evaluation Review, Volume 39, Issue 4 (March 2012), 134 pages
ISSN: 0163-5999
DOI: 10.1145/2185395

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Markov decision process
  2. admission control
  3. quality of service
  4. service oriented architecture

Qualifiers

  • Research-article
