LAPACK

{{Short description|Software library for numerical linear algebra}}

{{Infobox software

| name = LAPACK (Netlib reference implementation)

| logo = LAPACK logo.svg

| logo size = 120px

| screenshot =

| caption =

| collapsible =

| author =

| developer =

| released = {{Start date and age|1992}}

| latest release version = {{Wikidata|property|reference|edit|P348}}

| latest release date = {{Start date and age|{{wikidata|qualifier|P348|P577}}|df=yes}}

| latest preview version =

| latest preview date =

| programming language = Fortran 90

| operating system =

| platform =

| size =

| language =

| genre = Software library

| license = BSD-new

| website =

}}

LAPACK ("Linear Algebra Package") is a standard software library for numerical linear algebra. It provides routines for solving systems of linear equations and linear least squares, eigenvalue problems, and singular value decomposition. It also includes routines to implement the associated matrix factorizations such as LU, QR, Cholesky and Schur decomposition.{{cite book|last1=Anderson|first1=E.|last2=Bai|first2=Z.|last3=Bischof|first3=C.|last4=Blackford|first4=S.|last5=Demmel|first5=J.|author-link5=James Demmel|last6=Dongarra|first6=J.|author-link6=Jack Dongarra|last7=Du Croz|first7=J.|last8=Greenbaum|first8=A.|author-link8=Anne Greenbaum|last9=Hammarling|first9=S.|last10=McKenney|first10=A.|last11=Sorensen|first11=D.

| title =LAPACK Users' Guide

| edition = Third

| publisher = Society for Industrial and Applied Mathematics

| year = 1999

| location = Philadelphia, PA

| isbn = 0-89871-447-8

| url = https://www.netlib.org/lapack/lug/

| access-date=28 May 2022

}} LAPACK was originally written in FORTRAN 77, but moved to Fortran 90 in version 3.2 (2008).{{cite web|url=https://www.netlib.org/lapack/lapack-3.2.html|title=LAPACK 3.2 Release Notes|date=16 November 2008}} The routines handle both real and complex matrices in both single and double precision. LAPACK relies on an underlying BLAS implementation to provide efficient and portable computational building blocks for its routines.{{rp|at=[https://www.netlib.org/lapack/lug/node65.html "The BLAS as the Key to Portability"]}}

LAPACK was designed as the successor to the linear equations and linear least-squares routines of LINPACK and the eigenvalue routines of EISPACK. LINPACK, written in the 1970s and 1980s, was designed to run on the then-modern vector computers with shared memory. LAPACK, in contrast, was designed to effectively exploit the caches on modern cache-based architectures and the instruction-level parallelism of modern superscalar processors,{{rp|at=[https://www.netlib.org/lapack/lug/node61.html "Factors that Affect Performance"]}} and thus can run orders of magnitude faster than LINPACK on such machines, given a well-tuned BLAS implementation.{{rp|at=[https://www.netlib.org/lapack/lug/node65.html "The BLAS as the Key to Portability"]}} LAPACK has also been extended to run on distributed memory systems in later packages such as ScaLAPACK and PLAPACK.{{cite web|access-date=20 April 2017| date=12 June 2007| title=PLAPACK: Parallel Linear Algebra Package| url=https://www.cs.utexas.edu/users/plapack/| website=www.cs.utexas.edu| publisher=University of Texas at Austin}}

Netlib LAPACK is licensed under a three-clause BSD style license, a permissive free software license with few restrictions.{{cite web |title=LICENSE.txt |url=https://www.netlib.org/lapack/LICENSE.txt |website=Netlib |access-date=28 May 2022}}

== Naming scheme ==

Subroutines in LAPACK have a naming convention which makes the identifiers very compact. This was necessary as the first Fortran standards only supported identifiers up to six characters long, so the names had to be shortened to fit into this limit.{{rp|at=[https://www.netlib.org/lapack/lug/node24.html "Naming Scheme"]}}

A LAPACK subroutine name is in the form pmmaaa, where:

  • p is a one-letter code denoting the type of numerical constants used: S and D stand for real floating-point arithmetic in single and double precision respectively, while C and Z stand for complex arithmetic in single and double precision respectively. The newer version, LAPACK95, uses generic subroutines to overcome the need to explicitly specify the data type.
  • mm is a two-letter code denoting the kind of matrix expected by the algorithm. The codes for the different kinds of matrices are listed in the table below; the actual data are stored in a format that depends on the specific kind; e.g., when the code DI is given, the subroutine expects a vector of length n containing the diagonal elements, while when the code GE is given, the subroutine expects an {{math|n×n}} array containing the entries of the matrix.
  • aaa is a one- to three-letter code describing the actual algorithm implemented in the subroutine, e.g. SV denotes a subroutine to solve a linear system, while R denotes a rank-1 update.

For example, the subroutine to solve a linear system with a general (non-structured) matrix using real double-precision arithmetic is called DGESV.{{Rp|at=[https://www.netlib.org/lapack/lug/node26.html "Linear Equations"]}}
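The naming convention above can be illustrated with a short sketch (not part of LAPACK itself; the dictionaries below cover only a few of the codes) that decodes a routine name into its three components:

```python
# Decode a LAPACK routine name of the form pmmaaa, per the convention above.
# Only a handful of codes are included here for illustration.
PRECISIONS = {"S": "real single", "D": "real double",
              "C": "complex single", "Z": "complex double"}
MATRIX_TYPES = {"GE": "general", "SY": "symmetric", "TR": "triangular",
                "PO": "symmetric/Hermitian positive definite"}

def decode(name):
    """Split a LAPACK routine name into precision, matrix kind, and algorithm."""
    p, mm, aaa = name[0], name[1:3], name[3:]
    return PRECISIONS[p], MATRIX_TYPES.get(mm, mm), aaa

# DGESV decodes as: double-precision real, general matrix, SV (solve linear system)
```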

{| class="wikitable"
|+ Matrix types in the LAPACK naming scheme
! Name !! Description
|-
| BD || bidiagonal matrix
|-
| DI || diagonal matrix
|-
| GB || general band matrix
|-
| GE || general matrix (i.e., unsymmetric, in some cases rectangular)
|-
| GG || general matrices, generalized problem (i.e., a pair of general matrices)
|-
| GT || general tridiagonal matrix
|-
| HB || (complex) Hermitian band matrix
|-
| HE || (complex) Hermitian matrix
|-
| HG || upper Hessenberg matrix, generalized problem (i.e., a Hessenberg and a triangular matrix)
|-
| HP || (complex) Hermitian matrix, packed storage
|-
| HS || upper Hessenberg matrix
|-
| OP || (real) orthogonal matrix, packed storage
|-
| OR || (real) orthogonal matrix
|-
| PB || symmetric or Hermitian positive-definite band matrix
|-
| PO || symmetric or Hermitian positive-definite matrix
|-
| PP || symmetric or Hermitian positive-definite matrix, packed storage
|-
| PT || symmetric or Hermitian positive-definite tridiagonal matrix
|-
| SB || (real) symmetric band matrix
|-
| SP || symmetric matrix, packed storage
|-
| ST || (real) symmetric tridiagonal matrix
|-
| SY || symmetric matrix
|-
| TB || triangular band matrix
|-
| TG || triangular matrices, generalized problem (i.e., a pair of triangular matrices)
|-
| TP || triangular matrix, packed storage
|-
| TR || triangular matrix (or in some cases quasi-triangular)
|-
| TZ || trapezoidal matrix
|-
| UN || (complex) unitary matrix
|-
| UP || (complex) unitary matrix, packed storage
|}

== Use with other programming languages and libraries ==

Many programming environments today support the use of libraries with C binding (LAPACKE, a standardised C interface,{{cite web |title=The LAPACKE C Interface to LAPACK |url=https://netlib.org/lapack/lapacke.html |website=LAPACK — Linear Algebra PACKage |access-date=2024-09-22 }} has been part of LAPACK since version 3.4.0{{cite web |title=LAPACK 3.4.0 |url=https://netlib.org/lapack/lapack-3.4.0.html |website=LAPACK — Linear Algebra PACKage |access-date=2024-09-22 }}), allowing LAPACK routines to be used directly so long as a few restrictions are observed. Additionally, many other software libraries and tools for scientific and numerical computing are built on top of LAPACK, such as R,{{Cite web |title=R: LAPACK Library |url=https://stat.ethz.ch/R-manual/R-patched/library/base/html/La_library.html |access-date=2022-03-19 |website=stat.ethz.ch }} MATLAB,{{cite web |title=LAPACK in MATLAB |url=https://www.mathworks.com/help/matlab/math/lapack-in-matlab.html |website=Mathworks Help Center |access-date=28 May 2022 }} and SciPy.{{cite web |title=Low-level LAPACK functions |url=https://docs.scipy.org/doc/scipy/reference/linalg.lapack.html |website=SciPy v1.8.1 Manual |access-date=28 May 2022 }}
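As a small illustration of such a wrapper, the sketch below (assuming SciPy is installed) calls the low-level LAPACK routine dgesv through SciPy to solve a general double-precision linear system:

```python
# Solve A x = b with LAPACK's DGESV via SciPy's low-level wrapper.
# scipy.linalg.lapack.dgesv returns the LU factors, pivot indices,
# the solution, and an info status code (0 means success).
import numpy as np
from scipy.linalg.lapack import dgesv

a = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

lu, piv, x, info = dgesv(a, b)
# x is the solution of A x = b; here x == [2.0, 3.0]
```

Higher-level interfaces such as scipy.linalg.solve call routines like this one internally, choosing the appropriate precision prefix based on the input arrays.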

Several alternative language bindings are also available:

  • Armadillo for C++
  • IT++ for C++
  • LAPACK++ for C++
  • Lacaml for OCaml
  • SciPy for Python
  • Gonum for Go
  • [https://metacpan.org/pod/PDL::LinearAlgebra PDL::LinearAlgebra] for Perl Data Language
  • [https://metacpan.org/pod/Math::Lapack Math::Lapack] for Perl
  • [https://github.com/2xmax/NLapack NLapack] for .NET
  • [https://github.com/DanielMartensson/CControl CControl] for C in embedded systems
  • [https://docs.rs/lapack/latest/lapack/ lapack] for Rust

== Implementations ==

As with BLAS, LAPACK is sometimes forked or rewritten to provide better performance on specific systems. Some of the implementations are:

; Accelerate: Apple's framework for macOS and iOS, which includes tuned versions of BLAS and LAPACK.{{Cite web|url=https://developer.apple.com/library/mac/#releasenotes/Performance/RN-vecLib/|title=Guides and Sample Code|website=developer.apple.com|access-date=2017-07-07}}{{Cite web|url=https://developer.apple.com/library/ios/#documentation/Accelerate/Reference/AccelerateFWRef/|title=Guides and Sample Code|website=developer.apple.com|access-date=2017-07-07}}

; Netlib LAPACK: The official LAPACK.

; Netlib ScaLAPACK: Scalable (multicore) LAPACK, built on top of PBLAS.

; Intel MKL: Intel's Math Kernel Library, optimized math routines (including BLAS and LAPACK) for its x86 CPUs.

; OpenBLAS: Open-source reimplementation of BLAS and LAPACK.

; Gonum LAPACK: A partial native Go implementation.

Since LAPACK typically calls underlying BLAS routines to perform the bulk of its computations, simply linking to a better-tuned BLAS implementation can be enough to significantly improve performance. As a result, LAPACK is not reimplemented as often as BLAS is.

=== Similar projects ===

These projects provide similar functionality to LAPACK, but with a main interface differing from that of LAPACK:

; Libflame: A dense linear algebra library. Has a LAPACK-compatible wrapper. Can be used with any BLAS, although BLIS is the preferred implementation.{{cite web |title=amd/libflame: High-performance object-based library for DLA computations |url=https://github.com/amd/libflame |website=GitHub |publisher=AMD |date=25 August 2020}}

; Eigen: A header library for linear algebra. Has a BLAS and a partial LAPACK implementation for compatibility.

; MAGMA: The Matrix Algebra on GPU and Multicore Architectures (MAGMA) project develops a dense linear algebra library similar to LAPACK but for heterogeneous and hybrid architectures, including multicore systems accelerated with GPGPUs.

; PLASMA: The Parallel Linear Algebra for Scalable Multi-core Architectures (PLASMA) project is a modern replacement for LAPACK on multi-core architectures. PLASMA is a software framework for the development of asynchronous operations, featuring out-of-order scheduling with a runtime scheduler called QUARK, which may be used for any code that expresses its dependencies as a directed acyclic graph.{{Cite web |url=http://icl.eecs.utk.edu/ |title=ICL |website=icl.eecs.utk.edu |access-date=2017-07-07 }}

{{See also|Basic Linear Algebra Subprograms#Similar libraries (not compatible with BLAS)}}

== See also ==

{{Portal|Free and open-source software}}

== References ==