
Using the run-time sizes of data structures to guide parallel-thread creation

Published: 01 July 1994

Abstract

Dynamic granularity estimation is a new technique for automatically identifying expressions in functional languages for parallel evaluation. Expressions that perform little computation relative to thread-creation costs should be evaluated sequentially for maximum performance, but identifying such expressions statically is difficult. Dynamic granularity estimation therefore has both a compile-time and a run-time component: abstract interpretation statically identifies functions whose complexity depends on the sizes of their data structures, while the run-time system maintains approximations to these sizes. Compiler-inserted checks consult this size information to make thread-creation decisions dynamically.
We describe dynamic granularity estimation for a list-based functional language. Extension to general recursive data structures and imperative operations is possible. Performance measurements of dynamic granularity estimation in a parallel ML implementation on a shared-memory machine demonstrate the possibility of large reductions (>20%) in execution time.
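The mechanism the abstract describes can be sketched in miniature. The following Python fragment is a hypothetical illustration, not the paper's implementation: every name in it is invented, the size field here is an exact per-cell count rather than the cheap approximation the run-time system actually maintains, and the "compiler-inserted" check appears as an ordinary conditional before thread creation.

```python
import threading

# Hypothetical sketch of dynamic granularity estimation (all names invented).
# Each cons cell carries a size field standing in for the run-time system's
# size approximation; a check consults it before deciding to create a thread.

GRANULARITY_THRESHOLD = 4  # assumed cutoff: spawn only for "big enough" work

class Cons:
    """Cons cell that tracks an estimate of the list's length in O(1)."""
    def __init__(self, head, tail):
        self.head = head
        self.tail = tail
        self.approx_len = 1 + (tail.approx_len if tail is not None else 0)

def from_list(xs):
    node = None
    for x in reversed(xs):
        node = Cons(x, node)
    return node

def to_list(node):
    out = []
    while node is not None:
        out.append(node.head)
        node = node.tail
    return out

def par_map(f, node):
    """Map f over a cons list, evaluating the tail in a separate thread
    only when the size estimate says the remaining work is large enough."""
    if node is None:
        return None
    if node.tail is not None and node.tail.approx_len >= GRANULARITY_THRESHOLD:
        result = [None]
        t = threading.Thread(
            target=lambda: result.__setitem__(0, par_map(f, node.tail)))
        t.start()
        head = f(node.head)   # evaluate the head while the tail runs
        t.join()
        return Cons(head, result[0])
    # below the threshold: plain sequential evaluation
    return Cons(f(node.head), par_map(f, node.tail))
```

For example, `to_list(par_map(lambda x: x * x, from_list([1, 2, 3, 4, 5, 6])))` spawns threads only while the remaining suffix is at least `GRANULARITY_THRESHOLD` cells long, then falls back to sequential recursion for the short tail.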



Reviews

David B. Skillicorn

The overhead of spawning a new task to compute some expression in a functional language is often high. It is therefore important to know that the amount of work the spawned task will do is enough to cover the costs of its creation, initial setup, and destruction. The proposed solution uses a combination of abstract interpretation at compile time, to find those places where complexity depends on the size of data structures, and run-time approximation of the actual sizes of data structures as computation proceeds. Results using a functional language on a shared-memory parallel computer show execution time reductions as large as 20 percent.

The actual data structures used are cons lists. I remain mystified by the continual appearance of a data structure so inherently sequential in functional languages intended for parallel execution. Cons lists are relatively easy to analyze, but they are not very useful for solving problems in parallel. As the authors point out, their idea extends to more useful data structures, such as join lists and trees, but it becomes much more complex. Its actual performance on richer data structures remains an open question.

Surprisingly, the authors do not discuss the large body of European work on compilation of functional languages for parallelism. Many systems have actually been built, so their designers must have faced up to the issue of subtask creation. Indeed, some, such as Flagship and GRIP, have had to do it for distributed-memory targets, where the costs are correspondingly higher. A comparison of the present result with existing techniques would have strengthened the paper.



Published In

ACM SIGPLAN Lisp Pointers  Volume VII, Issue 3
July-Sept. 1994
327 pages
ISSN:1045-3563
DOI:10.1145/182590
  • ACM Conferences
    LFP '94: Proceedings of the 1994 ACM conference on LISP and functional programming
    July 1994
    327 pages
    ISBN:0897916433
    DOI:10.1145/182409
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States


Qualifiers

  • Article
