DOI: 10.1145/379539
PPoPP '01: Proceedings of the eighth ACM SIGPLAN symposium on Principles and practices of parallel programming
ACM 2001 Proceedings
Publisher:
Association for Computing Machinery, New York, NY, United States
Conference:
PPoPP '01: ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, Snowbird, Utah, USA
ISBN:
978-1-58113-346-2
Published:
18 June 2001
Sponsors:
SIGPLAN
Abstract

No abstract available.

Article
Reference idempotency analysis: a framework for optimizing speculative execution

Recent proposals for multithreaded architectures allow threads with unknown dependences to execute speculatively in parallel. These architectures use hardware speculative storage to buffer uncertain data, track data dependences and roll back incorrect ...

Article
Pointer and escape analysis for multithreaded programs

This paper presents a new combined pointer and escape analysis for multithreaded programs. The algorithm uses a new abstraction called parallel interaction graphs to analyze the interactions between threads and extract precise points-to, escape, and ...

Article
Language support for Morton-order matrices

The uniform representation of 2-dimensional arrays serially in Morton order (or Z order) supports both their iterative scan with Cartesian indices and their divide-and-conquer manipulation as quaternary trees. This data structure is important ...
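
For orientation (a generic sketch, not code from the paper): the Morton index of an element is obtained by interleaving the bits of its row and column indices, which is what lets the same layout serve both Cartesian indexing and quadtree-style divide and conquer. A minimal C version, assuming 16-bit row/column indices:

    #include <stdint.h>

    /* Spread the low 16 bits of x so that bit i moves to bit 2*i. */
    static uint32_t spread_bits(uint32_t x)
    {
        x &= 0x0000FFFF;
        x = (x | (x << 8)) & 0x00FF00FF;
        x = (x | (x << 4)) & 0x0F0F0F0F;
        x = (x | (x << 2)) & 0x33333333;
        x = (x | (x << 1)) & 0x55555555;
        return x;
    }

    /* Morton (Z-order) index of element (row, col): interleave the bits.
     * Putting row bits in the odd positions is one common convention. */
    static uint32_t morton_index(uint16_t row, uint16_t col)
    {
        return (spread_bits(row) << 1) | spread_bits(col);
    }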

Article
Efficient load balancing for wide-area divide-and-conquer applications

Divide-and-conquer programs are easily parallelized by letting the programmer annotate potential parallelism in the form of spawn and sync constructs. To achieve efficient program execution, the generated work load has to be balanced evenly among the ...
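
For context (a generic illustration only; the paper targets wide-area Java programs, not OpenMP): the spawn/sync style of annotating divide-and-conquer parallelism can be sketched with OpenMP tasks in C, where a task plays the role of spawn and taskwait the role of sync:

    #include <omp.h>

    /* Naive divide-and-conquer Fibonacci; recursive calls may run as tasks. */
    long fib(int n)
    {
        long a, b;
        if (n < 2)
            return n;
        #pragma omp task shared(a)      /* "spawn" the first subproblem */
        a = fib(n - 1);
        b = fib(n - 2);                 /* compute the second one locally */
        #pragma omp taskwait            /* "sync": wait for spawned work */
        return a + b;
    }

Called from inside a parallel region (e.g. under "#pragma omp parallel" and "#pragma omp single"), the runtime scheduler then distributes the generated tasks; balancing that load efficiently across wide-area clusters is the problem the paper addresses.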

Article
Scalable queue-based spin locks with timeout

Queue-based spin locks allow programs with busy-wait synchronization to scale to very large multiprocessors, without fear of starvation or performance-destroying contention. So-called try locks, traditionally based on non-scalable test-and-set locks, ...
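
As background (a deliberately non-scalable baseline, not one of the queue-based algorithms the paper proposes): a test-and-set "try lock" simply spins until it either acquires the lock or exhausts a caller-supplied patience interval. A minimal C11 sketch:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <time.h>

    typedef struct { atomic_flag held; } tas_lock;  /* init with ATOMIC_FLAG_INIT */

    /* Spin on test-and-set until acquired or until 'patience' seconds elapse.
     * Returns true on acquisition, false on timeout. (Baseline sketch only.) */
    static bool tas_try_acquire(tas_lock *l, double patience)
    {
        struct timespec start, now;
        clock_gettime(CLOCK_MONOTONIC, &start);
        while (atomic_flag_test_and_set_explicit(&l->held, memory_order_acquire)) {
            clock_gettime(CLOCK_MONOTONIC, &now);
            double elapsed = (now.tv_sec - start.tv_sec) +
                             (now.tv_nsec - start.tv_nsec) * 1e-9;
            if (elapsed > patience)
                return false;                        /* give up: timeout */
        }
        return true;
    }

    static void tas_release(tas_lock *l)
    {
        atomic_flag_clear_explicit(&l->held, memory_order_release);
    }

The difficulty the paper takes on is supporting timeout in queue-based locks, where a thread that gives up must remove itself from the waiting queue without disturbing its neighbors.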

Article
Contention elimination by replication of sequential sections in distributed shared memory programs

In shared memory programs contention often occurs at the transition between a sequential and a parallel section of the code. As all threads start executing the parallel section, they often access data just modified by the thread that executed the ...

Article
Accurate data redistribution cost estimation in software distributed shared memory systems

Distributing data is one of the key problems in implementing efficient distributed-memory parallel programs. The problem becomes more difficult in programs where data redistribution between computational phases is considered. The global data ...

Article
Dynamic adaptation to available resources for parallel computing in an autonomous network of workstations

Networks of workstations (NOWs), which are generally composed of autonomous compute elements networked together, are an attractive parallel computing platform since they offer high performance at low cost. The autonomous nature of the environment, ...

Article
Source-level global optimizations for fine-grain distributed shared memory systems

This paper describes and evaluates the use of aggressive static analysis in Jackal, a fine-grain Distributed Shared Memory (DSM) system for Java. Jackal uses an optimizing, source-level compiler rather than the binary rewriting techniques employed by ...

Article
High-level adaptive program optimization with ADAPT

Compile-time optimization is often limited by a lack of target machine and input data set knowledge. Without this information, compilers may be forced to make conservative assumptions to preserve correctness and to avoid performance degradation. In ...

Article
Blocking and array contraction across arbitrarily nested loops using affine partitioning

Applicable to arbitrary sequences and nests of loops, affine partitioning is a program transformation framework that unifies many previously proposed loop transformations, including unimodular transforms, fusion, fission, reindexing, scaling and ...
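
For orientation (a textbook illustration of the blocking transformation alone, not of the paper's affine-partitioning framework): tiling splits a loop nest into cache-sized blocks so each block of data is reused before being evicted. A minimal C sketch, with the array size and tile size chosen purely for illustration:

    #define N 1024
    #define B 64          /* tile size; illustrative value only */

    /* Blocked (tiled) copy-transpose: the loop nest is decomposed into
     * B x B tiles so that each tile stays cache-resident while processed. */
    void transpose_blocked(const double src[N][N], double dst[N][N])
    {
        for (int ii = 0; ii < N; ii += B)
            for (int jj = 0; jj < N; jj += B)
                for (int i = ii; i < ii + B; i++)
                    for (int j = jj; j < jj + B; j++)
                        dst[j][i] = src[i][j];
    }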

Article
Efficiency vs. portability in cluster-based network servers

Efficiency and portability are conflicting objectives for cluster-based network servers that distribute the clients' requests across the cluster based on the actual content requested. Our work is based on the observation that this efficiency vs. ...

Article
Statistical scalability analysis of communication operations in distributed applications

Current trends in high performance computing suggest that users will soon have widespread access to clusters of multiprocessors with hundreds, if not thousands, of processors. This unprecedented degree of parallelism will undoubtedly expose scalability ...

Article
LogGPS: a parallel computational model for synchronization analysis

We present a new parallel computational model, named LogGPS, which captures synchronization.

The LogGPS model is an extension of the LogGP model, which abstracts communication on parallel platforms. Although the LogGP model captures long messages with ...
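
For reference (the base LogGP cost only; the paper's extension adds a synchronization term for messages sent with rendezvous-style protocols): under LogGP, a k-byte message is commonly charged o + (k-1)G + L + o, with L the network latency, o the per-message software overhead, and G the gap per byte. A small sketch with placeholder parameters:

    /* Estimated one-way time for a k-byte message under LogGP (seconds).
     * L = latency, o = per-message overhead, G = gap per byte.
     * Parameter values are measured per platform; nothing here is from the paper. */
    static double loggp_send_time(double L, double o, double G, long k)
    {
        return o + (k - 1) * G + L + o;  /* send overhead + bytes + wire + receive overhead */
    }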

Article
Peer to peer and distributed computing
Contributors
  • Oregon Health & Science University
  • University of Washington

Acceptance Rates

Overall Acceptance Rate: 230 of 1,014 submissions, 23%

Year        Submitted  Accepted  Rate
PPoPP '21         150        31   21%
PPoPP '20         121        28   23%
PPoPP '19         152        29   19%
PPoPP '17         132        29   22%
PPoPP '14         184        28   15%
PPoPP '07          65        22   34%
PPoPP '03          45        20   44%
PPoPP '99          79        17   22%
PPOPP '97          86        26   30%
Overall         1,014       230   23%