Matrix-free methods
In computational mathematics, a matrix-free method is an algorithm for solving a linear system of equations or an eigenvalue problem that does not store the coefficient matrix explicitly, but instead accesses the matrix by evaluating matrix-vector products.{{Citation | last1=Langville | first1=Amy N. |author1-link= Amy Langville |last2=Meyer | first2=Carl D. | title=Google's PageRank and beyond: the science of search engine rankings | publisher=Princeton University Press | isbn=978-0-691-12202-1 | year=2006 | page=40}} Such methods can be preferable when the matrix is so large that storing and manipulating it would require prohibitive amounts of memory and computing time, even with the use of methods for sparse matrices. Many iterative methods allow for a matrix-free implementation, including:
- the power method,
- the Lanczos algorithm,{{Citation |doi=10.1016/0024-3795(93)90235-G |title=Solving linear equations over GF(2): Block Lanczos algorithm |year=1993 |last1=Coppersmith |first1=Don |journal=Linear Algebra and Its Applications |volume=192 |pages=33–60|doi-access=free }}
- Locally Optimal Block Preconditioned Conjugate Gradient Method (LOBPCG),{{Cite journal| doi = 10.1137/S1064827500366124| title = Toward the Optimal Preconditioned Eigensolver: Locally Optimal Block Preconditioned Conjugate Gradient Method| journal = SIAM Journal on Scientific Computing| volume = 23| issue = 2| pages = 517–541| year = 2001| last1 = Knyazev | first1 = Andrew V. | bibcode = 2001SJSC...23..517K| citeseerx = 10.1.1.34.2862}}
- Wiedemann's coordinate recurrence algorithm,{{Citation |url=http://www.enseignement.polytechnique.fr/profs/informatique/Francois.Morain/Master1/Crypto/projects/Wiedemann86.pdf|doi=10.1109/TIT.1986.1057137 |title=Solving sparse linear equations over finite fields |year=1986 |last1=Wiedemann |first1=D. |journal=IEEE Transactions on Information Theory |volume=32 |pages=54–62}}
- the conjugate gradient method,{{Citation | doi=10.1007/3-540-38424-3_8 | chapter=Solving Large Sparse Linear Systems Over Finite Fields | title=Advances in Cryptology – CRYPT0' 90 | series=Lecture Notes in Computer Science | year=1991 | last1=Lamacchia | first1=B. A. | last2=Odlyzko | first2=A. M. | isbn=978-3-540-54508-8 | volume=537 | pages=109| doi-access=free }}
- Krylov subspace methods.
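The common ingredient of the methods above is that they touch the matrix only through its action on a vector. As a minimal sketch (not taken from any of the cited works), the conjugate gradient method below solves a one-dimensional Poisson system whose tridiagonal coefficient matrix is never stored; the solver receives only a function computing the matrix-vector product:

```python
def apply_laplacian(v):
    """Apply the n-by-n tridiagonal 1-D Laplacian (2 on the diagonal,
    -1 on the off-diagonals) to v without forming the matrix."""
    n = len(v)
    out = [0.0] * n
    for i in range(n):
        out[i] = 2.0 * v[i]
        if i > 0:
            out[i] -= v[i - 1]
        if i < n - 1:
            out[i] -= v[i + 1]
    return out

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def conjugate_gradient(apply_A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A,
    given only the action v -> A v."""
    x = [0.0] * len(b)
    r = list(b)          # residual b - A x, with initial guess x = 0
    p = list(r)          # initial search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = apply_A(p)  # the only access to the matrix
        alpha = rs_old / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol ** 2:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

x = conjugate_gradient(apply_laplacian, [1.0] * 8)
```

For this small example the matrix would fit in memory easily; the point is that `conjugate_gradient` needs only `apply_A`, so the same code scales to operators (e.g. discretized differential operators) whose explicit matrix would be too large to store.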
Distributed solutions have also been explored using coarse-grained parallel software systems to compute homogeneous solutions of large sparse linear systems.{{Citation | last1=Kaltofen | first1=E. | last2=Lobo | first2=A. | title=Distributed Matrix-Free Solution of Large Sparse Linear Systems over Finite Fields | year=1996 | periodical=Algorithmica | volume=24 | pages=311–348 |doi=10.1007/PL00008266 | issue=3–4| citeseerx=10.1.1.17.7470 | s2cid=13305650 }}
Matrix-free methods are commonly used for solving non-linear equations, such as the Euler equations in computational fluid dynamics. A matrix-free conjugate gradient method has also been applied in a non-linear elasto-plastic finite element solver.{{cite journal |last1=Prabhune |first1=Bhagyashree C. |author2-link=Krishnan Suresh |last2=Suresh |first2=Krishnan |title=A fast matrix-free elasto-plastic solver for predicting residual stresses in additive manufacturing |journal=Computer Aided Design |date=4 March 2020 |volume=123 |page=102829 |doi=10.1016/j.cad.2020.102829 |doi-access=free }} Solving these equations with Newton-type iterations requires the Jacobian, which is costly to compute and store. To avoid this expense, the Jacobian is never formed explicitly; instead, the Jacobian-vector product is computed directly, which is itself just a vector. Manipulating and calculating this vector is easier than working with the full matrix or linear system.
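The Jacobian-vector product can be approximated by a finite difference of the residual function, J(u)v ≈ (F(u + εv) − F(u))/ε, so the Jacobian never needs to be assembled. A minimal sketch, using a hypothetical small nonlinear residual `F` chosen only for illustration:

```python
def F(u):
    """Hypothetical nonlinear residual for illustration:
    F_i(u) = u_i^2 + u_{(i+1) mod n} - 1."""
    n = len(u)
    return [u[i] ** 2 + u[(i + 1) % n] - 1.0 for i in range(n)]

def jacobian_vector_product(F, u, v, eps=1e-7):
    """Approximate J(u) v by the forward difference
    (F(u + eps*v) - F(u)) / eps, without ever forming J."""
    Fu = F(u)
    F_perturbed = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(fp - fu) / eps for fp, fu in zip(F_perturbed, Fu)]

# Example: compare against the analytic product
# (J v)_i = 2 u_i v_i + v_{(i+1) mod n} for this particular F.
u = [0.5, -0.3, 1.2]
v = [1.0, 2.0, -1.0]
approx = jacobian_vector_product(F, u, v)
```

Each such product costs only one extra residual evaluation, which is what makes Newton-Krylov iterations practical when the Jacobian itself would be too expensive to compute or store.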