linear span

{{Short description|In linear algebra, generated subspace}}

[[File:Basis for a plane.svg|thumb]]

In mathematics, the linear span (also called the linear hull{{Harvard citation text|Encyclopedia of Mathematics|2020}} or just span) of a set {{mvar|S}} of elements of a vector space {{mvar|V}} is the smallest linear subspace of {{mvar|V}} that contains {{mvar|S}}. It is the set of all finite linear combinations of the elements of {{mvar|S}},{{Harvard citation text|Axler|2015}} p. 29, § 2.7 and the intersection of all linear subspaces that contain {{mvar|S}}. It is often denoted {{math|span(S)}}{{Harvard citation text|Axler|2015}} pp. 29-30, §§ 2.5, 2.8 or \langle S \rangle.

For example, in geometry, two linearly independent vectors span a plane.

To express that a vector space {{mvar|V}} is a linear span of a subset {{mvar|S}}, one commonly uses one of the following phrases: {{mvar|S}} spans {{mvar|V}}; {{mvar|S}} is a spanning set of {{mvar|V}}; {{mvar|V}} is spanned or generated by {{mvar|S}}; {{mvar|S}} is a generator set or a generating set of {{mvar|V}}.

Spans can be generalized to many mathematical structures, in which case the smallest substructure containing {{mvar|S}} is generally called the substructure generated by {{mvar|S}}.

Definition

Given a vector space {{mvar|V}} over a field {{mvar|K}}, the span of a set {{mvar|S}} of vectors (not necessarily finite) is defined to be the intersection {{mvar|W}} of all subspaces of {{mvar|V}} that contain {{mvar|S}}. It is thus the smallest (for set inclusion) subspace containing {{mvar|S}}. It is referred to as the subspace spanned by {{mvar|S}}, or by the vectors in {{mvar|S}}. Conversely, {{mvar|S}} is called a spanning set of {{mvar|W}}, and we say that {{mvar|S}} spans {{mvar|W}}.

It follows from this definition that the span of {{mvar|S}} is the set of all finite linear combinations of elements (vectors) of {{mvar|S}}, and can be defined as such.{{Harvard citation text|Hefferon|2020}} p. 100, ch. 2, Definition 2.13{{Harvard citation text|Axler|2015}} pp. 29-30, §§ 2.5, 2.8{{Harvard citation text|Roman|2005}} pp. 41-42 That is, \operatorname{span}(S) = \biggl\{ \lambda_1 \mathbf v_1 + \lambda_2 \mathbf v_2 + \cdots + \lambda_n \mathbf v_n \mid n \in \N,\; \mathbf v_1, \ldots, \mathbf v_n \in S,\; \lambda_1, \ldots, \lambda_n \in K \biggr\}

When {{mvar|S}} is empty, the only possibility is {{math|1=n = 0}}, and the previous expression for \operatorname{span}(S) reduces to the empty sum.{{efn| This is logically valid as when {{math|1= n = 0}}, the conditions for the vectors and constants are empty, and therefore vacuously satisfied.}} The standard convention for the empty sum thus implies \operatorname{span}(\emptyset) = \{\mathbf 0\}, a property that is immediate with the other definitions. However, many introductory textbooks simply include this fact as part of the definition.

When S=\{\mathbf v_1,\ldots, \mathbf v_n\} is finite, one has

\operatorname{span}(S) = \{ \lambda_1 \mathbf v_1 + \lambda_2 \mathbf v_2 + \cdots + \lambda_n \mathbf v_n \mid \lambda_1, \ldots, \lambda_n \in K \}
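The finite characterization above makes span membership effectively checkable: a vector \mathbf u lies in \operatorname{span}(S) exactly when the linear system \lambda_1 \mathbf v_1 + \cdots + \lambda_n \mathbf v_n = \mathbf u has a solution. The following sketch (the helper name in_span is illustrative, not from the cited sources) decides this over the rationals by Gaussian elimination with exact arithmetic:

```python
from fractions import Fraction

def in_span(vectors, target):
    """Decide whether `target` is a finite linear combination of `vectors`
    over the rationals, i.e. whether target lies in span(vectors)."""
    if not vectors:                      # span of the empty set is {0}
        return all(x == 0 for x in target)
    m, n = len(target), len(vectors)
    # Augmented matrix [v_1 ... v_n | target]; the vectors are its columns.
    rows = [[Fraction(vectors[j][i]) for j in range(n)] + [Fraction(target[i])]
            for i in range(m)]
    pivot_row = 0
    for col in range(n):
        pr = next((r for r in range(pivot_row, m) if rows[r][col] != 0), None)
        if pr is None:
            continue                     # no pivot in this column
        rows[pivot_row], rows[pr] = rows[pr], rows[pivot_row]
        for r in range(m):               # clear the column above and below
            if r != pivot_row and rows[r][col] != 0:
                f = rows[r][col] / rows[pivot_row][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
    # The system is unsolvable iff some row reads 0 = nonzero.
    return not any(all(x == 0 for x in row[:-1]) and row[-1] != 0
                   for row in rows)
```

For instance, in_span([(1, 0, 0), (0, 1, 0)], (1, 1, 0)) returns True, while the target (0, 0, 1) is rejected; the empty-set convention span(∅) = {0} is handled explicitly.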

Examples

The real vector space \mathbb R^3 has {(−1, 0, 0), (0, 1, 0), (0, 0, 1)} as a spanning set. This particular spanning set is also a basis. If (−1, 0, 0) were replaced by (1, 0, 0), the resulting set would be the canonical basis of \mathbb R^3.

Another spanning set for the same space is given by {(1, 2, 3), (0, 1, 2), (−1, {{frac|1|2}}, 3), (1, 1, 1)}, but this set is not a basis, because it is linearly dependent.

The set {{math|{(1, 0, 0), (0, 1, 0), (1, 1, 0)}}} is not a spanning set of \mathbb R^3, since its span is the space of all vectors in \mathbb R^3 whose last component is zero. That space is also spanned by the set {(1, 0, 0), (0, 1, 0)}, as (1, 1, 0) is a linear combination of (1, 0, 0) and (0, 1, 0). Thus, the spanned space is not \mathbb R^3. It can be identified with \mathbb R^2 by dropping the third component, which is always zero.

The empty set is a spanning set of {(0, 0, 0)}, since the empty set is a subset of every subspace of \mathbb R^3, and {(0, 0, 0)} is the intersection of all of these subspaces.

The set of monomials {{math|''x''<sup>''n''</sup>}}, where {{mvar|n}} is a non-negative integer, spans the space of polynomials.

Theorems

= Equivalence of definitions =

The set of all linear combinations of a subset {{mvar|S}} of {{mvar|V}}, a vector space over {{mvar|K}}, is the smallest linear subspace of {{mvar|V}} containing {{mvar|S}}.

:Proof. We first prove that {{math|span S}} is a subspace of {{mvar|V}}. Since {{mvar|S}} is a subset of {{mvar|V}}, we only need to prove the existence of a zero vector {{math|0}} in {{math|span S}}, that {{math|span S}} is closed under addition, and that {{math|span S}} is closed under scalar multiplication. Letting S = \{ \mathbf v_1, \mathbf v_2, \ldots, \mathbf v_n \} (the argument for an infinite {{mvar|S}} is the same, since every linear combination involves only finitely many of its elements), it is trivial that the zero vector of {{mvar|V}} exists in {{math|span S}}, since \mathbf 0 = 0 \mathbf v_1 + 0 \mathbf v_2 + \cdots + 0 \mathbf v_n. Adding together two linear combinations of {{mvar|S}} also produces a linear combination of {{mvar|S}}: (\lambda_1 \mathbf v_1 + \cdots + \lambda_n \mathbf v_n) + (\mu_1 \mathbf v_1 + \cdots + \mu_n \mathbf v_n) = (\lambda_1 + \mu_1) \mathbf v_1 + \cdots + (\lambda_n + \mu_n) \mathbf v_n, where all \lambda_i, \mu_i \in K, and multiplying a linear combination of {{mvar|S}} by a scalar c \in K will produce another linear combination of {{mvar|S}}: c(\lambda_1 \mathbf v_1 + \cdots + \lambda_n \mathbf v_n) = c\lambda_1 \mathbf v_1 + \cdots + c\lambda_n \mathbf v_n. Thus {{math|span S}} is a subspace of {{mvar|V}}.

:It follows that S \subseteq \operatorname{span} S, since every {{math|'''v'''<sub>''i''</sub>}} is a linear combination of {{mvar|S}} (trivially). Suppose that {{mvar|W}} is a linear subspace of {{mvar|V}} containing {{mvar|S}}. Since {{mvar|W}} is closed under addition and scalar multiplication, then every linear combination \lambda_1 \mathbf v_1 + \cdots + \lambda_n \mathbf v_n must be contained in {{mvar|W}}. Thus, {{math|span S}} is contained in every subspace of {{mvar|V}} containing {{mvar|S}}, and the intersection of all such subspaces, or the smallest such subspace, is equal to the set of all linear combinations of {{mvar|S}}.

= Size of spanning set is at least size of linearly independent set =

Every spanning set {{mvar|S}} of a vector space {{mvar|V}} must contain at least as many elements as any linearly independent set of vectors from {{mvar|V}}.

:Proof. Let S = \{ \mathbf v_1, \ldots, \mathbf v_m \} be a spanning set and W = \{ \mathbf w_1, \ldots, \mathbf w_n \} be a linearly independent set of vectors from {{mvar|V}}. We want to show that m \geq n.

:Since {{mvar|S}} spans {{mvar|V}}, then S \cup \{ \mathbf w_1 \} must also span {{mvar|V}}, and \mathbf w_1 must be a linear combination of {{mvar|S}}. Thus S \cup \{ \mathbf w_1 \} is linearly dependent, and we can remove one vector from {{mvar|S}} that is a linear combination of the other elements. This vector cannot be any of the {{math|'''w'''<sub>''i''</sub>}}, since {{mvar|W}} is linearly independent. The resulting set is \{ \mathbf w_1, \mathbf v_1, \ldots, \mathbf v_{i-1}, \mathbf v_{i+1}, \ldots, \mathbf v_m \}, which is a spanning set of {{mvar|V}}. We repeat this step {{mvar|n}} times, where the resulting set after the {{mvar|p}}th step is the union of \{ \mathbf w_1, \ldots, \mathbf w_p \} and {{mvar|m - p}} vectors of {{mvar|S}}.

:Until the {{mvar|n}}th step, there is always some {{math|'''v'''<sub>''i''</sub>}} to remove from {{mvar|S}} for each {{math|'''w'''<sub>''p''</sub>}} adjoined, and thus there are at least as many {{math|'''v'''<sub>''i''</sub>}}'s as there are {{math|'''w'''<sub>''i''</sub>}}'s; that is, m \geq n. To verify this, we assume by way of contradiction that m < n. Then, at the {{mvar|m}}th step, we have the set \{ \mathbf w_1, \ldots, \mathbf w_m \} and we can adjoin another vector \mathbf w_{m+1}. But, since \{ \mathbf w_1, \ldots, \mathbf w_m \} is a spanning set of {{mvar|V}} (all the vectors of {{mvar|S}} having been removed), \mathbf w_{m+1} is a linear combination of \{ \mathbf w_1, \ldots, \mathbf w_m \}. This is a contradiction, since {{mvar|W}} is linearly independent.

= Spanning set can be reduced to a basis =

Let {{mvar|V}} be a finite-dimensional vector space. Any set of vectors that spans {{mvar|V}} can be reduced to a basis for {{mvar|V}}, by discarding vectors if necessary (i.e. if there are linearly dependent vectors in the set). If the axiom of choice holds, this is true without the assumption that {{mvar|V}} has finite dimension. This also indicates that a basis is a minimal spanning set when {{mvar|V}} is finite-dimensional.
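A minimal sketch of this reduction over the rationals (the function names are illustrative assumptions, not from the cited texts): scan the spanning set once and keep exactly those vectors that increase the rank of the set retained so far.

```python
from fractions import Fraction

def rank(vectors):
    """Rank over the rationals of the given list of vectors (row reduction)."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        pr = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pr is None:
            continue                     # no pivot in this column
        rows[r], rows[pr] = rows[pr], rows[r]
        for i in range(r + 1, len(rows)):
            if rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def reduce_to_basis(spanning_set):
    """Keep each vector that raises the rank; the survivors are linearly
    independent and have the same span, hence form a basis of that span."""
    basis = []
    for v in spanning_set:
        if rank(basis + [v]) > rank(basis):
            basis.append(v)
    return basis
```

Applied to the set {(1, 0, 0), (0, 1, 0), (1, 1, 0)} from the examples above, this discards (1, 1, 0) and returns the basis {(1, 0, 0), (0, 1, 0)} of the spanned plane.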

Generalizations

Generalizing the definition of the span of points in space, a subset {{mvar|X}} of the ground set of a matroid is called a spanning set if the rank of {{mvar|X}} equals the rank of the entire ground set.{{sfnp|Oxley|2011|p=28}}

The vector space definition can also be generalized to modules.{{Harvard citation text|Roman|2005}} p. 96, ch. 4{{Harvard citation text|Mac Lane|Birkhoff|1999}} p. 193, ch. 6 Given an {{mvar|R}}-module {{mvar|A}} and a collection of elements {{math|''a''<sub>1</sub>}}, ..., {{math|''a''<sub>''n''</sub>}} of {{mvar|A}}, the submodule of {{mvar|A}} spanned by {{math|''a''<sub>1</sub>}}, ..., {{math|''a''<sub>''n''</sub>}} is the sum of cyclic modules

Ra_1 + \cdots + Ra_n = \left\{ \sum_{k=1}^n r_k a_k \bigg| r_k \in R \right\}

consisting of all {{mvar|R}}-linear combinations of the elements {{math|''a''<sub>''i''</sub>}}. As in the case of vector spaces, the submodule of {{mvar|A}} spanned by any subset of {{mvar|A}} is the intersection of all submodules containing that subset.
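As a concrete sketch (an illustrative assumption, not taken from the cited references), take {{math|1=''R'' = '''Z'''}} and {{math|1=''A'' = '''Z'''}}: the submodule generated by integers a_1, \ldots, a_n is \gcd(a_1, \ldots, a_n)\Z. A finite window of coefficients already exhibits this:

```python
from math import gcd
from itertools import product

def submodule_window(gens, coeffs):
    """Enumerate the Z-linear combinations sum r_k a_k of the generators
    `gens`, with each coefficient r_k drawn from the finite range `coeffs`.
    This is a finite window into the submodule Z a_1 + ... + Z a_n of Z."""
    return sorted({sum(r * a for r, a in zip(rs, gens))
                   for rs in product(coeffs, repeat=len(gens))})

# Every combination of 4 and 6 is a multiple of gcd(4, 6) = 2,
# and 2 itself is attained: 2 = (-1)*4 + 1*6.
window = submodule_window((4, 6), range(-2, 3))
```

Here every element of the window is a multiple of gcd(4, 6) = 2, and 2 itself appears as (−1)·4 + 1·6.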

Closed linear span (functional analysis)

In functional analysis, the closed linear span of a set of vectors is the smallest closed linear subspace that contains the linear span of that set.

Suppose that {{mvar|X}} is a normed vector space and let {{mvar|E}} be any non-empty subset of {{mvar|X}}. The closed linear span of {{mvar|E}}, denoted by \overline{\operatorname{Sp}}(E) or \overline{\operatorname{Span}}(E), is the intersection of all the closed linear subspaces of {{mvar|X}} which contain {{mvar|E}}.

One mathematical formulation of this is

:\overline{\operatorname{Sp}}(E) = \{u\in X | \forall\varepsilon > 0\,\exists x\in\operatorname{Sp}(E) : \|x - u\|<\varepsilon\}.

The closed linear span of the set of functions {{math|''x''<sup>''n''</sup>}} on the interval [0, 1], where {{mvar|n}} is a non-negative integer, depends on the norm used. If the {{math|''L''<sup>2</sup>}} norm is used, then the closed linear span is the Hilbert space of square-integrable functions on the interval. But if the maximum norm is used, the closed linear span will be the space of continuous functions on the interval. In either case, the closed linear span contains functions that are not polynomials, and so are not in the linear span itself. However, the cardinality of the set of functions in the closed linear span is the cardinality of the continuum, which is the same cardinality as for the set of polynomials.

= Notes =

The linear span of a set is dense in the closed linear span. Moreover, as stated in the lemma below, the closed linear span is indeed the closure of the linear span.

Closed linear spans are important when dealing with closed linear subspaces (which are themselves highly important, see Riesz's lemma).

= A useful lemma =

Let {{mvar|X}} be a normed space and let {{mvar|E}} be any non-empty subset of {{mvar|X}}; for parts (c) and (d), assume in addition that {{mvar|X}} is an inner product space (complete, i.e. a Hilbert space, for (d)), so that orthogonal complements are defined. Then

{{ordered list

|list-style-type=lower-alpha

| \overline{\operatorname{Sp}}(E) is a closed linear subspace of X which contains E,

| \overline{\operatorname{Sp}}(E) = \overline{\operatorname{Sp}(E)}, viz. \overline{\operatorname{Sp}}(E) is the closure of \operatorname{Sp}(E),

| E^\perp = (\operatorname{Sp}(E))^\perp = \left(\overline{\operatorname{Sp}(E)}\right)^\perp,

| (E^\perp)^\perp = ((\operatorname{Sp}(E))^\perp)^\perp = \overline{\operatorname{Sp}(E)}.

}}

(So the usual way to find the closed linear span is to find the linear span first, and then the closure of that linear span.)

See also

Footnotes

{{notelist}}

Citations

Sources

= Textbooks =

  • {{Cite book |last=Axler |first=Sheldon Jay |url=https://linear.axler.net/LADR4e.pdf#page=43 |title=Linear Algebra Done Right |publisher= Springer |year=2015 |isbn=978-3-319-11079-0 |edition=3rd |author-link=Sheldon Axler}}
  • {{Cite book |last=Hefferon |first=Jim |url=https://www.cs.ox.ac.uk/files/12921/book.pdf#page=110 |title=Linear Algebra |publisher=Orthogonal Publishing |year=2020 |isbn=978-1-944325-11-4 |edition=4th |author-link=Jim Hefferon}}
  • {{Cite book |last1=Mac Lane |first1=Saunders |title=Algebra |last2=Birkhoff |first2=Garrett |publisher=AMS Chelsea Publishing |year=1999 |isbn=978-0821816462 |edition=3rd |author-link=Saunders Mac Lane |author-link2=Garrett Birkhoff |orig-year=1988}}
  • {{cite book |last=Oxley |first=James G. |author-link=James Oxley |title=Matroid Theory |series=Oxford Graduate Texts in Mathematics |volume=3 |edition=2nd |publisher=Oxford University Press |year=2011 |isbn=9780199202508}}

  • {{Cite book |last=Roman |first=Steven |url=http://matematicas.uis.edu.co/sites/default/files/paginas/archivos/Advanced%20Linear%20Algebra%20-%20Steven%20Roman.pdf#page=56 |title=Advanced Linear Algebra |publisher=Springer |year=2005 |isbn=0-387-24766-1 |edition=2nd |author-link=Steven Roman}}
  • {{Cite book |last1=Rynne |first1=Brian P. |last2=Youngson |first2=Martin A. |title=Linear Functional Analysis |publisher=Springer |year=2008 |isbn=978-1848000049}}
  • {{Cite book |last=Lay |first=David C. |title=Linear Algebra and Its Applications |publisher=Pearson |year=2021 |edition=6th}}

= Web =

  • {{cite web|last1=Lankham|first1=Isaiah|last2=Nachtergaele|first2=Bruno|author2-link=Bruno Nachtergaele|last3=Schilling|first3=Anne|author3-link=Anne Schilling|date=13 February 2010|title=Linear Algebra - As an Introduction to Abstract Mathematics|url=https://www.math.ucdavis.edu/~anne/linear_algebra/mat67_course_notes.pdf|access-date=27 September 2011|publisher=University of California, Davis}}
  • {{Cite web|last=Weisstein|first=Eric Wolfgang|author-link=Eric W. Weisstein|title=Vector Space Span|url=https://mathworld.wolfram.com/VectorSpaceSpan.html|access-date=16 Feb 2021|website=MathWorld|ref=CITEREFMathWorld2021}}
  • {{Cite web|date=5 April 2020|title=Linear hull|url=https://encyclopediaofmath.org/wiki/Linear_hull|access-date=16 Feb 2021|website=Encyclopedia of Mathematics|ref=CITEREFEncyclopedia_of_Mathematics2020}}