We investigate the average-case speed and scalability of parallel algorithms executing on multiprocessors, using average-speed and isospeed scalability as our performance metrics. For the purpose of average-case performance prediction, we formally define the concepts of average-case average-speed and average-case isospeed scalability. We model parallel algorithms on multiprocessors by task precedence graphs and are mainly interested in the effects of synchronization overhead and load imbalance on the performance of parallel computations. Thus, we focus on the structures of parallel computations, whose inherent sequential parts limit high performance. Task execution times are treated as random variables so that we can analyze the average-case performance of parallel computations. For several typical classes of task graphs, including iterative computations, search trees, partitioning algorithms, and diamond dags, we derive the growth rate of the number of tasks required to keep the average-speed constant, as well as the resulting isospeed scalability. In addition to analytical results, extensive numerical data are also presented. An important finding of our study is that while a parallel computation can be made scalable by increasing the problem size together with the system size, it is actually the amount of parallelism that should scale up with the system size. We also argue that under our probabilistic model of parallel algorithms and the list scheduling strategy, the number of tasks should grow at least at the rate Θ(P log P), where P is the number of processors, so that a constant average-speed can be maintained. Furthermore, Θ(log P / log P'), where P' is the scaled number of processors, is the highest isospeed scalability achievable.
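To see how the final two claims fit together, the following is a minimal derivation sketch; it assumes the standard isospeed definitions (average-speed as speed per processor, and isospeed scalability as the ratio of per-processor workload growth) and identifies the workload W with the number of tasks, each task having constant expected execution time; the constant c is a hypothetical proportionality factor introduced only for illustration.

\[
\bar{S} = \frac{W}{P\,T}, \qquad \psi(P, P') = \frac{P'\,W}{P\,W'},
\]

where T is the parallel execution time and W' is the workload required on P' processors to sustain the average-speed attained with workload W on P processors. If keeping \(\bar{S}\) constant forces W = Θ(P log P), then W' = Θ(P' log P'), and

\[
\psi(P, P') = \frac{P' \cdot c\,P \log P}{P \cdot c\,P' \log P'} = \frac{\log P}{\log P'},
\]

which is exactly the stated upper bound on achievable isospeed scalability.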