

4.4 Parallelization issues

pw.x can run in principle on any number of processors. The effectiveness of parallelization is ultimately judged by the "scaling", i.e. how the time needed to perform a job scales with the number of processors, and depends upon:

- the size and type of the system under study;
- the judicious choice of the various levels of parallelization;
- the availability of fast interprocess communication.

Ideally one would like to have linear scaling, i.e. T ~ T0/Np for Np processors, where T0 is the estimated time for serial execution. In addition, one would like to have linear scaling of the RAM per processor: O ~ O0/Np, so that large-memory systems fit into the RAM of each processor.
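As a purely illustrative instance of the ideal case (the numbers are made up): a job taking T0 = 16 hours and O0 = 64 GB of RAM in serial execution would, with perfect scaling on Np = 8 processors, require

    T ~ T0/Np = 16 h / 8 = 2 h      O ~ O0/Np = 64 GB / 8 = 8 GB per processor

In practice the scaling is never perfect, for the reasons discussed below.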

Parallelization on k-points:

- guarantees (almost) linear scaling if the number of k-points is a multiple of the number of pools;
- requires little communication (suitable for ethernet);
- does not reduce the required memory per processor (unsuitable for large-memory jobs).

Parallelization on PWs:

- yields good to very good scaling, especially if the number of processors in a pool is a divisor of N3 and Nr3 (the dimensions along the z axis of the FFT grids);
- requires heavy communication (Gigabit ethernet is adequate only for a few processors per pool; faster communication hardware is needed beyond that);
- yields an almost linear reduction of memory per processor with the number of processors in the pool.

A note on scaling: optimal serial performances are achieved when the data are kept as much as possible in the cache. As a side effect, PW parallelization may yield superlinear (better than linear) scaling, thanks to the increase in serial speed coming from the reduction of data size (making it easier for the machine to keep data in the cache).

VERY IMPORTANT: For each system there is an optimal range of number of processors on which to run the job. Too large a number of processors will yield performance degradation. The size of pools is especially delicate: the number of processors in each pool should not exceed N3 and Nr3, and should ideally be no larger than 1/2÷1/4 of N3 and/or Nr3. In order to increase scalability, it is often convenient to further subdivide a pool of processors into "task groups": when the number of processors exceeds the number of FFT planes, data can be redistributed to task groups so that each group can process several wavefunctions at the same time.
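As a minimal launch sketch (the processor counts, the number of pools and the number of task groups are hypothetical examples, not recommendations; the exact mpirun syntax and the input file name depend on your system), pools and task groups are selected from the command line:

    # 64 MPI processes, divided into 4 pools of 16 processors each (-nk 4);
    # each pool is further split into 2 task groups (-ntg 2), so that
    # several wavefunctions are FFT-transformed at the same time
    mpirun -np 64 pw.x -nk 4 -ntg 2 -input pw.scf.in > pw.scf.out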

The optimal number of processors for "linear-algebra" parallelization, which takes care of the multiplication and diagonalization of M×M matrices, should be determined by observing the performances of cdiagh/rdiagh (pw.x) or ortho (cp.x) for different numbers of processors in the linear-algebra group (which must be a perfect square).
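For instance (again with hypothetical numbers, shown only to illustrate the syntax), the size of the linear-algebra group can be set from the command line with the -ndiag flag:

    # as above, with subspace diagonalization of M x M matrices distributed
    # over a 2 x 2 = 4 processor linear-algebra group (-ndiag 4)
    mpirun -np 64 pw.x -nk 4 -ntg 2 -ndiag 4 -input pw.scf.in > pw.scf.out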

Actual parallel performances will also depend on the available software (MPI libraries) and on the available communication hardware. For PC clusters, OpenMPI (http://www.openmpi.org/) seems to yield better performances than other implementations (info by Kostantin Kudin). Note however that you need decent communication hardware (at least Gigabit ethernet) in order to have acceptable performances with PW parallelization. Do not expect good scaling with cheap hardware: PW calculations are by no means an "embarrassingly parallel" problem.

Also note that multiprocessor motherboards for Intel Pentium CPUs typically have just one memory bus for all processors. This dramatically slows down any code that makes heavy use of memory (as most codes in the QUANTUM ESPRESSO distribution do) when it runs on several processors of the same motherboard.

