User's Guide: ScaLAPACK & BLACS

Introduction

ScaLAPACK (Scalable Linear Algebra PACKage) is a library of high-performance linear algebra routines for distributed-memory message-passing computers. ScaLAPACK has routines for systems of linear equations, linear least squares problems, eigenvalue calculation, and singular value decomposition. ScaLAPACK can also handle many associated computations such as matrix factorization or estimating condition numbers. Dense and band matrices are supported, but not general sparse matrices. Similar functionality is provided for both real and complex matrices.

As in LAPACK, the ScaLAPACK routines are based on block-partitioned algorithms in order to minimize data movement. The fundamental building block of the ScaLAPACK library is a distributed-memory version of the Level 1, 2, and 3 BLAS, called PBLAS (Parallel BLAS). The PBLAS are, in turn, built on the BLAS for computation on a single node, and on the BLACS for communication across nodes. PBLAS is an integral part of the ScaLAPACK library.

The BLACS (Basic Linear Algebra Communication Subprograms) are a message-passing library designed for linear algebra. The computational model consists of a one- or two-dimensional process grid, where each process stores pieces of the matrices and vectors. The BLACS include synchronous send/receive routines to communicate a matrix or submatrix from one process to another, to broadcast submatrices to many processes, or to compute global data reductions (sums, maxima, and minima). There are also routines to construct, change, or query the process grid. Since several ScaLAPACK algorithms require broadcasts or reductions among different subsets of processes, the BLACS permit a process to be a member of several overlapping or disjoint process grids, each one labeled by a context. In MPI this is called a communicator. The BLACS provide facilities for safe interoperation of system contexts and BLACS contexts.
User Interface

User interface information is available from several sources:

ScaLAPACK Routine List
    Simple Driver and Divide and Conquer Driver Routines
    Expert Driver and RRR Driver Routines
    Computational Routines
PBLAS Routine List
BLACS Routine List
Further Reading
