
Online parallel scheduling of non-uniform tasks

Published: 26 July 2015

Abstract

Consider a system in which tasks of different execution times arrive continuously and have to be executed by a set of machines that are prone to crashes and restarts. In this paper we model and study the impact of parallelism and failures on the competitiveness of such an online system. In a fault-free environment, a simple Longest-In-System scheduling policy, enhanced by a redundancy-avoidance mechanism, guarantees optimality in a long-term execution. In the presence of failures, though, scheduling becomes a much more challenging task. In particular, no deterministic parallel algorithm can be competitive against an offline optimal solution, even with a single machine and tasks of only two different execution times. We find that when additional energy is provided to the system in the form of processing speedup, the situation changes. Specifically, we identify thresholds on the speedup below which such competitiveness cannot be achieved by any deterministic algorithm, and above which competitive algorithms exist. Finally, we propose algorithms that achieve small bounded competitive ratios when the speedup is over the threshold.
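The Longest-In-System policy with redundancy avoidance can be illustrated with a short sketch: each available machine picks the pending task that has been in the system longest (earliest arrival), skipping any task already claimed by another machine. This is an illustrative reading of the policy named in the abstract, not the paper's actual algorithm; the function and variable names are assumptions.

```python
def lis_schedule(pending, num_machines):
    """Assign each available machine the pending task that has been
    longest in the system (earliest arrival time), while avoiding
    redundancy: no task is assigned to two machines at once.

    `pending` is a list of (arrival_time, task_id) pairs.
    Returns a dict mapping machine index -> task_id.
    """
    # Smaller arrival time = longer in the system, so oldest tasks first.
    queue = sorted(pending)
    assignment = {}
    assigned = set()
    for machine in range(num_machines):
        for arrival, task in queue:
            if task not in assigned:  # redundancy avoidance
                assignment[machine] = task
                assigned.add(task)
                break
    return assignment
```

For example, with tasks that arrived at times 1, 2, and 3 and two machines, machine 0 takes the oldest task and machine 1 the second oldest; neither task is assigned twice.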



Published In

Theoretical Computer Science, Volume 590, Issue C
July 2015
152 pages

Publisher

Elsevier Science Publishers Ltd.

United Kingdom


Author Tags

  1. Competitiveness
  2. Energy efficiency
  3. Failures
  4. Non-uniform tasks
  5. Online algorithms
  6. Scheduling

Qualifiers

  • Research-article
