
AVERAGE-CASE ANALYSIS OF ISOSPEED SCALABILITY OF PARALLEL COMPUTATIONS ON MULTIPROCESSORS

    https://doi.org/10.1142/S0129053300000072

    We investigate the average-case speed and scalability of parallel algorithms executing on multiprocessors. Our performance metrics are average-speed and isospeed scalability. For the purpose of average-case performance prediction, we formally define the concepts of average-case average-speed and average-case isospeed scalability. By modeling parallel algorithms on multiprocessors using task precedence graphs, we are mainly interested in the effects of synchronization overhead and load imbalance on the performance of parallel computations. Thus, we focus on the structures of parallel computations, whose inherent sequential parts limit high performance. Task execution times are treated as random variables, so that we can analyze the average-case performance of parallel computations. For several typical classes of task graphs, including iterative computations, search trees, partitioning algorithms, and diamond dags, we derive the growth rate of the number of tasks as well as the isospeed scalability required to maintain a constant average-speed. In addition to analytical results, extensive numerical data are also presented. An important discovery of our study is that while a parallel computation can be made scalable by increasing the problem size together with the system size, it is actually the amount of parallelism that should scale up with the system size. We also argue that under our probabilistic model of parallel algorithms and the list scheduling strategy, the number of tasks should grow at least at the rate of Θ(P log P), where P is the number of processors, so that a constant average-speed can be maintained. Furthermore, Θ(log P / log P') is the highest isospeed scalability achievable.
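As a minimal sketch of how the abstract's two closing claims fit together: taking the isospeed scalability as ψ(P, P') = W·P' / (W'·P), where W and W' are the workloads (numbers of tasks) needed to sustain the same average-speed on P and P' processors, and plugging in the stated Θ(P log P) task growth, the ratio reduces to log P / log P'. The constant c and the exact definition of ψ used below are illustrative assumptions, not taken from the paper's text.

```python
import math

def min_tasks(P, c=1.0):
    """Hypothetical Theta(P log P) task count needed to keep a
    constant average-speed on P processors (c is an assumed constant)."""
    return c * P * math.log(P)

def isospeed_scalability(P, P_prime):
    """psi(P, P') = W * P' / (W' * P), evaluated with the
    Theta(P log P) workload growth stated in the abstract."""
    W = min_tasks(P)
    W_prime = min_tasks(P_prime)
    return (W * P_prime) / (W_prime * P)

# The c*P factors cancel, leaving log P / log P':
# e.g. psi(16, 256) = log 16 / log 256 = 0.5
print(isospeed_scalability(16, 256))
```

Note that the constant c cancels, so only the log P / log P' ratio survives, matching the abstract's upper bound on achievable isospeed scalability.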