Tridiagonal matrix

{{Short description|Matrix with nonzero elements on the main diagonal and the diagonals above and below it}}

In linear algebra, a tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the subdiagonal (the first diagonal below the main diagonal, also called the lower diagonal), and the superdiagonal (the first diagonal above the main diagonal, also called the upper or supradiagonal). For example, the following matrix is tridiagonal:

:\begin{pmatrix}

1 & 4 & 0 & 0 \\

3 & 4 & 1 & 0 \\

0 & 2 & 3 & 4 \\

0 & 0 & 1 & 3 \\

\end{pmatrix}.

The determinant of a tridiagonal matrix is given by the continuant of its elements.{{cite book | author=Thomas Muir | author-link=Thomas Muir (mathematician) | title=A treatise on the theory of determinants | url=https://archive.org/details/treatiseontheory0000muir | url-access=registration | year=1960 | publisher=Dover Publications | pages=[https://archive.org/details/treatiseontheory0000muir/page/516 516–525] }}

An orthogonal transformation of a symmetric (or Hermitian) matrix to tridiagonal form can be done with the Lanczos algorithm.

=Properties=

A tridiagonal matrix is a matrix that is both an upper and a lower Hessenberg matrix.{{cite book|first1=Roger A.| last1=Horn | first2=Charles R. | last2=Johnson | title = Matrix Analysis | publisher= Cambridge University Press |year= 1985 |isbn = 0521386322 | page=28}} In particular, a tridiagonal matrix is a direct sum of p 1-by-1 and q 2-by-2 matrices such that {{nowrap|1=p + 2q = n}}, the dimension of the tridiagonal matrix. Although a general tridiagonal matrix is not necessarily symmetric or Hermitian, many of those that arise when solving linear algebra problems have one of these properties. Furthermore, if a real tridiagonal matrix A satisfies ak,k+1 ak+1,k > 0 for all k, so that the signs of its entries are symmetric, then it is similar to a Hermitian matrix by a diagonal change of basis matrix; hence, its eigenvalues are real. If the strict inequality is relaxed to ak,k+1 ak+1,k ≥ 0, then by continuity the eigenvalues are still guaranteed to be real, but the matrix need no longer be similar to a Hermitian matrix.Horn & Johnson, page 174

The set of all n × n tridiagonal matrices forms a (3n − 2)-dimensional vector space.

Many linear algebra algorithms require significantly less computational effort when applied to diagonal matrices, and this improvement often carries over to tridiagonal matrices as well.

=Determinant=

{{Main|Continuant (mathematics)}}

The determinant of a tridiagonal matrix A of order n can be computed from a three-term recurrence relation.{{Cite journal | last1 = El-Mikkawy | first1 = M. E. A. | title = On the inverse of a general tridiagonal matrix | doi = 10.1016/S0096-3003(03)00298-4 | journal = Applied Mathematics and Computation | volume = 150 | issue = 3 | pages = 669–679 | year = 2004 }} Write f1 = |a1| = a1 (i.e., f1 is the determinant of the 1 by 1 matrix consisting only of a1), and let

:f_n = \begin{vmatrix}

a_1 & b_1 \\

c_1 & a_2 & b_2 \\

& c_2 & \ddots & \ddots \\

& & \ddots & \ddots & b_{n-1} \\

& & & c_{n-1} & a_n

\end{vmatrix}.

The sequence (fi) is called the continuant and satisfies the recurrence relation

:f_n = a_n f_{n-1} - c_{n-1}b_{n-1}f_{n-2}

with initial values f0 = 1 and f−1 = 0. The cost of computing the determinant of a tridiagonal matrix using this formula is linear in n, while the cost is cubic for a general matrix.
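A minimal Python sketch of this recurrence (the function name tridiag_det and the three-array representation of the matrix are illustrative, not from the text):

```python
def tridiag_det(a, b, c):
    """Determinant of a tridiagonal matrix with diagonal a (length n),
    superdiagonal b and subdiagonal c (length n-1), via the continuant
    recurrence f_k = a_k f_{k-1} - c_{k-1} b_{k-1} f_{k-2} in O(n)."""
    fm1, f = 0.0, 1.0                # f_{-1} = 0, f_0 = 1
    for k in range(len(a)):
        fm1, f = f, a[k] * f - (c[k-1] * b[k-1] * fm1 if k > 0 else 0.0)
    return f
```

Applied to the 4-by-4 example matrix in the lead (a = (1, 4, 3, 3), b = (4, 1, 4), c = (3, 2, 1)), this returns −46, in agreement with cofactor expansion.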

=Inversion=

The inverse of a non-singular tridiagonal matrix T

:T = \begin{pmatrix}

a_1 & b_1 \\

c_1 & a_2 & b_2 \\

& c_2 & \ddots & \ddots \\

& & \ddots & \ddots & b_{n-1} \\

& & & c_{n-1} & a_n

\end{pmatrix}

is given by

:(T^{-1})_{ij} = \begin{cases}

(-1)^{i+j}b_i \cdots b_{j-1} \theta_{i-1} \phi_{j+1}/\theta_n & \text{ if } i < j\\

\theta_{i-1} \phi_{j+1}/\theta_n & \text{ if } i = j\\

(-1)^{i+j}c_j \cdots c_{i-1} \theta_{j-1} \phi_{i+1}/\theta_n & \text{ if } i > j\\

\end{cases}

where the θi satisfy the recurrence relation

:\theta_i = a_i \theta_{i-1} - b_{i-1}c_{i-1}\theta_{i-2} \qquad i=2,3,\ldots,n

with initial conditions θ0 = 1, θ1 = a1 and the ϕi satisfy

:\phi_i = a_i \phi_{i+1} - b_i c_i \phi_{i+2} \qquad i=n-1,\ldots,1

with initial conditions ϕn+1 = 1 and ϕn = an.{{Cite journal | last1 = Da Fonseca | first1 = C. M. | doi = 10.1016/j.cam.2005.08.047 | title = On the eigenvalues of some tridiagonal matrices | journal = Journal of Computational and Applied Mathematics | volume = 200 | pages = 283–286 | year = 2007 | doi-access = free }}{{Cite journal | last1 = Usmani | first1 = R. A. | doi = 10.1016/0024-3795(94)90414-6 | title = Inversion of a tridiagonal jacobi matrix | journal = Linear Algebra and Its Applications | volume = 212-213 | pages = 413–414 | year = 1994 | doi-access = free }}
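The formula translates directly into code. The following Python sketch (illustrative names; the off-diagonal products are recomputed with math.prod for clarity, though they could be accumulated incrementally) builds T−1 from the θ and ϕ recurrences:

```python
import math

def tridiag_inverse(a, b, c):
    """Inverse of a nonsingular tridiagonal matrix via the theta/phi
    recurrences.  a: diagonal (length n); b: superdiagonal and
    c: subdiagonal (both length n-1)."""
    n = len(a)
    theta = [0.0] * (n + 1)      # theta[i] holds theta_i, i = 0..n
    theta[0] = 1.0
    theta[1] = a[0]
    for i in range(2, n + 1):
        theta[i] = a[i-1] * theta[i-1] - b[i-2] * c[i-2] * theta[i-2]
    phi = [0.0] * (n + 2)        # phi[i] holds phi_i, i = 1..n+1
    phi[n+1] = 1.0
    phi[n] = a[n-1]
    for i in range(n - 1, 0, -1):
        phi[i] = a[i-1] * phi[i+1] - b[i-1] * c[i-1] * phi[i+2]
    inv = [[0.0] * n for _ in range(n)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i < j:      # product b_i ... b_{j-1}
                inv[i-1][j-1] = ((-1) ** (i + j) * math.prod(b[i-1:j-1])
                                 * theta[i-1] * phi[j+1] / theta[n])
            elif i > j:    # product c_j ... c_{i-1}
                inv[i-1][j-1] = ((-1) ** (i + j) * math.prod(c[j-1:i-1])
                                 * theta[j-1] * phi[i+1] / theta[n])
            else:
                inv[i-1][j-1] = theta[i-1] * phi[i+1] / theta[n]
    return inv
```

For the 2-by-2 matrix with diagonal (2, 2) and off-diagonals (1), this reproduces the known inverse (1/3)·[[2, −1], [−1, 2]]. Note also that θn is the determinant of T, consistent with the continuant recurrence of the previous section.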

Closed form solutions can be computed for special cases such as symmetric matrices with all diagonal and off-diagonal elements equal{{Cite journal | last1 = Hu | first1 = G. Y. | last2 = O'Connell | first2 = R. F. | doi = 10.1088/0305-4470/29/7/020 | title = Analytical inversion of symmetric tridiagonal matrices | journal = Journal of Physics A: Mathematical and General | volume = 29 | issue = 7 | pages = 1511 | year = 1996 | bibcode = 1996JPhA...29.1511H }} or Toeplitz matrices{{Cite journal | last1 = Huang | first1 = Y. | last2 = McColl | first2 = W. F. | doi = 10.1088/0305-4470/30/22/026 | title = Analytical inversion of general tridiagonal matrices | journal = Journal of Physics A: Mathematical and General | volume = 30 | issue = 22 | pages = 7919 | year = 1997 | bibcode = 1997JPhA...30.7919H }} and for the general case as well.{{Cite journal | last1 = Mallik | first1 = R. K. | doi = 10.1016/S0024-3795(00)00262-7 | title = The inverse of a tridiagonal matrix | journal = Linear Algebra and Its Applications | volume = 325 | pages = 109–139 | year = 2001 | issue = 1–3 | doi-access = free }}{{Cite journal | last1 = Kılıç | first1 = E. | doi = 10.1016/j.amc.2007.07.046 | title = Explicit formula for the inverse of a tridiagonal matrix by backward continued fractions | journal = Applied Mathematics and Computation | volume = 197 | pages = 345–357 | year = 2008 }}

In general, the inverse of a tridiagonal matrix is a semiseparable matrix and vice versa.{{cite book|author1=Raf Vandebril|author2=Marc Van Barel|author3=Nicola Mastronardi|title=Matrix Computations and Semiseparable Matrices. Volume I: Linear Systems|year=2008|publisher=JHU Press|isbn=978-0-8018-8714-7|at=Theorem 1.38, p. 41}} The inverse of a symmetric tridiagonal matrix can be written as a single-pair matrix (a.k.a. generator-representable semiseparable matrix) of the form{{cite journal |last1=Meurant |first1=Gerard |title=A review on the inverse of symmetric tridiagonal and block tridiagonal matrices |journal=SIAM Journal on Matrix Analysis and Applications |date=1992 |volume=13 |issue=3 |pages=707–728 |doi=10.1137/0613045 |url=https://doi.org/10.1137/0613045|url-access=subscription }}{{cite journal |last1=Bossu |first1=Sebastien |title=Tridiagonal and single-pair matrices and the inverse sum of two single-pair matrices |journal=Linear Algebra and Its Applications |date=2024 |volume=699 |pages=129–158 |doi=10.1016/j.laa.2024.06.018 |url=https://authors.elsevier.com/a/1jOTP5YnCtZEc|arxiv=2304.06100 }}

\begin{pmatrix}

\alpha_1 & -\beta_1 \\

-\beta_1 & \alpha_2 & -\beta_2 \\

& \ddots & \ddots & \ddots & \\

& & \ddots & \ddots & -\beta_{n-1} \\

& & & -\beta_{n-1} & \alpha_n

\end{pmatrix}^{-1} =

\begin{pmatrix}

a_1 b_1 & a_1 b_2 & \cdots & a_1 b_n \\

a_1 b_2 & a_2 b_2 & \cdots & a_2 b_n \\

\vdots & \vdots & \ddots & \vdots \\

a_1 b_n & a_2 b_n & \cdots & a_n b_n

\end{pmatrix}

= \left( a_{\min(i,j)} b_{\max(i,j)} \right)

where \begin{cases} \displaystyle a_i = \frac{\beta_{i} \cdots \beta_{n-1}}{\delta_i \cdots \delta_n\,b_n}

\\ \displaystyle

b_i = \frac{\beta_1 \cdots \beta_{i-1}}{d_1 \cdots d_i}\end{cases}

with \begin{cases}

d_n = \alpha_n,\quad d_{i-1} = \alpha_{i-1} - \frac{\beta_{i-1}^2}{d_{i}}, & i = n, n-1, \cdots, 2,

\\

\delta_1 = \alpha_1, \quad

\delta_{i+1} = \alpha_{i+1} - \frac{\beta_{i}^2}{\delta_{i}}, & i = 1, 2, \cdots, n-1.

\end{cases}
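As a concrete check of these formulas, the following Python sketch (illustrative function name) runs the d and δ recurrences and assembles the single-pair inverse for a symmetric tridiagonal matrix with diagonal α and off-diagonal entries −β:

```python
def single_pair_inverse(alpha, beta):
    """Inverse of the symmetric tridiagonal matrix with diagonal alpha
    (length n) and off-diagonal entries -beta (length n-1), assembled
    as the single-pair matrix (a_min(i,j) * b_max(i,j)) via the d and
    delta recurrences."""
    n = len(alpha)
    d = [0.0] * n                    # d_n = alpha_n, backward recurrence
    d[n-1] = alpha[n-1]
    for i in range(n - 2, -1, -1):
        d[i] = alpha[i] - beta[i] ** 2 / d[i+1]
    delta = [0.0] * n                # delta_1 = alpha_1, forward recurrence
    delta[0] = alpha[0]
    for i in range(1, n):
        delta[i] = alpha[i] - beta[i-1] ** 2 / delta[i-1]
    b = [0.0] * n                    # b_i = beta_1...beta_{i-1} / (d_1...d_i)
    b[0] = 1.0 / d[0]
    for i in range(1, n):
        b[i] = b[i-1] * beta[i-1] / d[i]
    a = [0.0] * n                    # a_i = beta_i...beta_{n-1} / (delta_i...delta_n * b_n)
    a[n-1] = 1.0 / (delta[n-1] * b[n-1])
    for i in range(n - 2, -1, -1):
        a[i] = a[i+1] * beta[i] / delta[i]
    return [[a[min(i, j)] * b[max(i, j)] for j in range(n)] for i in range(n)]
```

For the matrix [[2, −1], [−1, 2]] (α = (2, 2), β = (1)) this yields [[2/3, 1/3], [1/3, 2/3]], the known inverse.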

=Solution of linear system=

{{Main|tridiagonal matrix algorithm}}

A system of equations Ax = b for b\in \R^n can be solved, when A is tridiagonal, by an efficient form of Gaussian elimination known as the tridiagonal matrix algorithm, requiring only O(n) operations.{{cite book|first1=Gene H.|last1=Golub|author-link1=Gene H. Golub |first2=Charles F. |last2=Van Loan|author-link2=Charles F. Van Loan| title =Matrix Computations| edition=3rd|publisher=The Johns Hopkins University Press|year=1996|isbn=0-8018-5414-8}}
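The algorithm (also called the Thomas algorithm) can be sketched in Python as follows; the function name and the three-array representation are illustrative, and this pivot-free form assumes a matrix for which elimination is stable, e.g. a diagonally dominant one, with n ≥ 2:

```python
def thomas_solve(a, b, c, d):
    """Solve T x = d for tridiagonal T (diagonal a, superdiagonal b,
    subdiagonal c) with the Thomas algorithm: one forward elimination
    sweep and one back substitution, O(n) in total.  No pivoting."""
    n = len(a)
    cp = [0.0] * (n - 1)             # modified superdiagonal
    dp = [0.0] * n                   # modified right-hand side
    cp[0] = b[0] / a[0]
    dp[0] = d[0] / a[0]
    for i in range(1, n):
        m = a[i] - c[i-1] * cp[i-1]  # pivot after elimination
        if i < n - 1:
            cp[i] = b[i] / m
        dp[i] = (d[i] - c[i-1] * dp[i-1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i+1]
    return x
```

For example, solving the 3-by-3 system with diagonal (2, 2, 2), off-diagonals (−1, −1), and right-hand side (1, 0, 1) gives x = (1, 1, 1).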

=Eigenvalues=

When a tridiagonal matrix is also Toeplitz, there is a simple closed-form solution for its eigenvalues, namely:{{Cite journal | doi = 10.1002/nla.1811| title = Tridiagonal Toeplitz matrices: Properties and novel applications| journal = Numerical Linear Algebra with Applications| volume = 20| issue = 2| pages = 302| year = 2013| last1 = Noschese | first1 = S. | last2 = Pasquini | first2 = L. | last3 = Reichel | first3 = L. }}This can also be written as a + 2 \sqrt{bc} \cos(k \pi / {(n+1)}) because \cos(x) = -\cos(\pi-x) , as is done in: {{Cite journal | last1 = Kulkarni | first1 = D. | last2 = Schmidt | first2 = D. | last3 = Tsui | first3 = S. K. | title = Eigenvalues of tridiagonal pseudo-Toeplitz matrices | doi = 10.1016/S0024-3795(99)00114-7 | journal = Linear Algebra and Its Applications | volume = 297 | pages = 63–80 | year = 1999 | issue = 1–3 | url = https://hal.archives-ouvertes.fr/hal-01461924/file/KST.pdf }}

: a - 2 \sqrt{bc} \cos \left (\frac{k\pi}{n+1} \right ), \qquad k=1, \ldots, n.
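The closed form can be verified numerically by evaluating the characteristic polynomial det(T − λI) with the continuant recurrence from the Determinant section; the sketch below (illustrative names, pure Python) checks it for the 5-by-5 discrete Laplacian stencil (a, b, c) = (2, −1, −1):

```python
import math

def char_poly(a, b, c, lam, n):
    """det(T - lam*I) for the n-by-n tridiagonal Toeplitz matrix with
    diagonal a, superdiagonal b, subdiagonal c, evaluated with the
    three-term continuant recurrence."""
    fm1, f = 0.0, 1.0                     # f_{-1} = 0, f_0 = 1
    for _ in range(n):
        fm1, f = f, (a - lam) * f - b * c * fm1
    return f

# The characteristic polynomial vanishes at every
# a - 2*sqrt(b*c)*cos(k*pi/(n+1)), k = 1, ..., n.
n, a, b, c = 5, 2.0, -1.0, -1.0
eigs = [a - 2 * math.sqrt(b * c) * math.cos(k * math.pi / (n + 1))
        for k in range(1, n + 1)]
assert all(abs(char_poly(a, b, c, lam, n)) < 1e-9 for lam in eigs)
```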

A real symmetric tridiagonal matrix has real eigenvalues, and all the eigenvalues are distinct (simple) if all off-diagonal elements are nonzero.{{Cite book | last1 = Parlett | first1 = B.N. | title = The Symmetric Eigenvalue Problem |orig-year = 1980 | publisher =SIAM |date=1997 |oclc= 228147822 |series=Classics in applied mathematics |volume=20 |isbn=0-89871-402-8 }} Numerous methods exist for the numerical computation of the eigenvalues of a real symmetric tridiagonal matrix to arbitrary finite precision, typically requiring O(n^2) operations for a matrix of size n\times n, although fast algorithms exist which (without parallel computation) require only O(n\log n).{{Cite journal |last1 = Coakley |first1= E.S. |last2=Rokhlin | first2=V. | title =A fast divide-and-conquer algorithm for computing the spectra of real symmetric tridiagonal matrices | doi = 10.1016/j.acha.2012.06.003 |journal = Applied and Computational Harmonic Analysis |volume = 34 |issue = 3 |pages = 379–414 |year =2012 |doi-access = free }}

An unreduced symmetric tridiagonal matrix is one whose off-diagonal elements are all nonzero; for such a matrix the eigenvalues are distinct, and the eigenvectors are unique up to a scale factor and mutually orthogonal.{{cite thesis |last1=Dhillon |first1=Inderjit Singh |title=A New O(n2) Algorithm for the Symmetric Tridiagonal Eigenvalue/Eigenvector Problem |page=8 |type=PhD |publisher=University of California, Berkeley |id=CSD-97-971, ADA637073 |date=1997 |url=http://www.cs.utexas.edu/~inderjit/public_papers/thesis.pdf}}

= Similarity to symmetric tridiagonal matrix =

For a nonsymmetric (unsymmetric) tridiagonal matrix, one can compute the eigendecomposition using a similarity transformation.

Given a real tridiagonal, nonsymmetric matrix

:

T = \begin{pmatrix}

a_1 & b_1 \\

c_1 & a_2 & b_2 \\

& c_2 & \ddots & \ddots \\

& & \ddots & \ddots & b_{n-1} \\

& & & c_{n-1} & a_n

\end{pmatrix}

where b_i \neq c_i .

Assume that each product of off-diagonal entries is {{em|strictly}} positive b_i c_i > 0 and define a transformation matrix D by{{Cite journal | last1 = Kreer | first1 = M. | title = Analytic birth-death processes: a Hilbert space approach | doi = 10.1016/0304-4149(94)90112-0 | journal = Stochastic Processes and Their Applications | volume = 49 | issue = 1 | pages = 65–74 | year = 1994 }}

:

D := \operatorname{diag}(\delta_1 , \dots, \delta_n)

\quad \text{for} \quad

\delta_i :=

\begin{cases}

1 & , \, i=1

\\

\sqrt{\frac{c_{i-1} \dots c_1}{b_{i-1} \dots b_1}} & , \, i=2,\dots,n \,.

\end{cases}

The similarity transformation D^{-1} T D yields a symmetric tridiagonal matrix J by:{{cite web |first=Gérard |last=Meurant |title=Tridiagonal matrices |date=October 2008 |url=http://www.math.hkbu.edu.hk/ICM/LecturesAndSeminars/08OctMaterials/1/Slide3.pdf |via=Institute for Computational Mathematics, Hong Kong Baptist University}}

:

J:=D^{-1} T D

= \begin{pmatrix}

a_1 & \sgn b_1 \, \sqrt{b_1 c_1} \\

\sgn b_1 \, \sqrt{b_1 c_1} & a_2 & \sgn b_2 \, \sqrt{b_2 c_2} \\

& \sgn b_2 \, \sqrt{b_2 c_2} & \ddots & \ddots \\

& & \ddots & \ddots & \sgn b_{n-1} \, \sqrt{b_{n-1} c_{n-1}} \\

& & & \sgn b_{n-1} \, \sqrt{b_{n-1} c_{n-1}} & a_n

\end{pmatrix} \,.

Note that T and J have the same eigenvalues.
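The construction of D and J is straightforward to implement; the Python sketch below (illustrative function name, requiring bi ci > 0 as above) returns the diagonal and off-diagonal of J together with the entries δi of D:

```python
import math

def symmetrize(a, b, c):
    """Return (diagonal of J, off-diagonal of J, entries delta_i of D)
    for J = D^{-1} T D, where T has diagonal a, superdiagonal b, and
    subdiagonal c.  Requires b[i] * c[i] > 0 for every i."""
    n = len(a)
    delta = [1.0] * n
    for i in range(1, n):
        delta[i] = delta[i-1] * math.sqrt(c[i-1] / b[i-1])
    # off-diagonal of J: sgn(b_i) * sqrt(b_i * c_i)
    off = [math.copysign(math.sqrt(b[i] * c[i]), b[i]) for i in range(n - 1)]
    return list(a), off, delta
```

For instance, with a = (1, 2, 3), b = (4, 1), c = (1, 4), the off-diagonal of J comes out as (2, 2) and D = diag(1, 1/2, 1).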

=Computer programming=

A transformation that reduces a general matrix to Hessenberg form will reduce a Hermitian matrix to tridiagonal form. So, many eigenvalue algorithms, when applied to a Hermitian matrix, reduce the input Hermitian matrix to (symmetric real) tridiagonal form as a first step.{{Cite journal |last1=Eidelman |first1=Yuli |last2=Gohberg |first2=Israel |last3=Gemignani |first3=Luca |date=2007-01-01 |title=On the fast reduction of a quasiseparable matrix to Hessenberg and tridiagonal forms |journal=Linear Algebra and Its Applications |language=en |volume=420 |issue=1 |pages=86–101 |doi=10.1016/j.laa.2006.06.028 |issn=0024-3795|doi-access=free }}

A tridiagonal matrix can also be stored more efficiently than a general matrix by using a special storage scheme. For instance, the LAPACK Fortran package stores an unsymmetric tridiagonal matrix of order n in three one-dimensional arrays, one of length n containing the diagonal elements, and two of length n − 1 containing the subdiagonal and superdiagonal elements.
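With this storage scheme, operations such as a matrix-vector product touch only the 3n − 2 stored entries. A Python sketch (illustrative function name; the array naming dl/d/du follows LAPACK's general tridiagonal routines such as dgtsv):

```python
def tridiag_matvec(dl, d, du, x):
    """y = T x with T stored in three arrays: d holds the n diagonal
    entries, dl the n-1 subdiagonal entries, du the n-1 superdiagonal
    entries.  O(n) work and O(n) storage."""
    n = len(d)
    y = [d[i] * x[i] for i in range(n)]
    for i in range(n - 1):
        y[i] += du[i] * x[i + 1]     # superdiagonal contribution
        y[i + 1] += dl[i] * x[i]     # subdiagonal contribution
    return y
```

Multiplying the 4-by-4 example matrix from the lead (dl = (3, 2, 1), d = (1, 4, 3, 3), du = (4, 1, 4)) by the all-ones vector returns the row sums (5, 8, 9, 4).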

=Applications=

The discretization in space of the one-dimensional diffusion or heat equation

:\frac{\partial u(t,x)}{\partial t} = \alpha \frac{\partial^2 u(t,x)}{\partial x^2}

using second order central finite differences results in

:

\begin{pmatrix}

\frac{\partial u_{1}(t)}{\partial t} \\

\frac{\partial u_{2}(t)}{\partial t} \\

\vdots \\

\frac{\partial u_{N}(t)}{\partial t}

\end{pmatrix}

= \frac{\alpha}{\Delta x^2} \begin{pmatrix}

-2 & 1 & 0 & \ldots & 0 \\

1 & -2 & 1 & \ddots & \vdots \\

0 & \ddots & \ddots & \ddots & 0 \\

\vdots & & 1 & -2 & 1 \\

0 & \ldots & 0 & 1 & -2

\end{pmatrix}

\begin{pmatrix}

u_{1}(t) \\

u_{2}(t) \\

\vdots \\

u_{N}(t) \\

\end{pmatrix}

with discretization constant \Delta x. The matrix is tridiagonal with a_{i}=-2 and b_{i}=c_{i}=1. Note: no boundary conditions are used here.
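The semidiscrete system above can be advanced in time with, for example, a forward Euler step; the sketch below (illustrative function name) applies the tridiagonal operator directly, treating values outside the grid as zero, which matches the matrix as written:

```python
def heat_step(u, alpha, dx, dt):
    """One forward-Euler step of du/dt = (alpha/dx^2) * L u, where L is
    tridiagonal with -2 on the diagonal and 1 on both off-diagonals.
    Grid values outside the domain are treated as zero.  The explicit
    scheme is stable for dt <= dx**2 / (2 * alpha)."""
    n = len(u)
    r = alpha * dt / dx ** 2
    return [u[i] + r * ((u[i-1] if i > 0 else 0.0)
                        - 2.0 * u[i]
                        + (u[i+1] if i < n - 1 else 0.0))
            for i in range(n)]
```

One step on the initial profile (0, 1, 0) with α = 1, Δx = 1, Δt = 0.1 spreads the peak to (0.1, 0.8, 0.1), as expected for diffusion.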

