musical isomorphism

{{Short description|Isomorphism between the tangent and cotangent bundles of a manifold.}}

{{refimprove|date=April 2015}}

In mathematics—more specifically, in differential geometry—the musical isomorphism (or canonical isomorphism) is an isomorphism between the tangent bundle \mathrm{T}M and the cotangent bundle \mathrm{T}^* M of a Riemannian or pseudo-Riemannian manifold induced by its metric tensor. There are similar isomorphisms on symplectic manifolds. The term musical refers to the use of the musical notation symbols \flat (flat) and \sharp (sharp).{{sfn|Lee|2003|loc=Chapter 11}}{{sfn|Lee|1997|loc=Chapter 3}}

In the notation of Ricci calculus and mathematical physics, the idea is expressed as the raising and lowering of indices. Raising and lowering indices are a form of index manipulation in tensor expressions.

In certain specialized applications, such as on Poisson manifolds, the analogous map may fail to be an isomorphism at singular points, and in those cases it is technically only a homomorphism.

Motivation

In linear algebra, a finite-dimensional vector space is isomorphic to its dual space, but not canonically isomorphic to it. On the other hand, a finite-dimensional vector space V endowed with a non-degenerate bilinear form \langle\cdot,\cdot\rangle is canonically isomorphic to its dual. The canonical isomorphism V \to V^* is given by

: v \mapsto \langle v, \cdot \rangle.

The non-degeneracy of \langle\cdot,\cdot\rangle means exactly that the above map is an isomorphism.

An example is where V = \mathbb R^n, and \langle\cdot,\cdot\rangle is the dot product.

The musical isomorphisms are the global version of this isomorphism and its inverse for the tangent bundle and cotangent bundle of a (pseudo-)Riemannian manifold (M,g). They are canonical isomorphisms of vector bundles which are at any point {{math|p}} the above isomorphism applied to the tangent space of {{math|M}} at {{math|p}} endowed with the inner product g_p.

Because every paracompact manifold can be (non-canonically) endowed with a Riemannian metric, the musical isomorphisms show that a vector bundle on a paracompact manifold is (non-canonically) isomorphic to its dual.

Discussion

Let {{math|(M, g)}} be a (pseudo-)Riemannian manifold. At each point {{mvar|p}}, the map {{math|g{{sub|p}}}} is a non-degenerate bilinear form on the tangent space {{math|T{{sub|p}}M}}. If {{mvar|v}} is a vector in {{math|T{{sub|p}}M}}, its flat is the covector

: v^\flat = g_p(v,\cdot)

in {{math|T{{su|lh=1em|p=∗|b=p }}M}}. Since this is a smooth map that preserves the point {{mvar|p}}, it defines a morphism of smooth vector bundles \flat : \mathrm{T}M \to \mathrm{T}^*M. By non-degeneracy of the metric, \flat has an inverse \sharp at each point, characterized by

: g_p(\alpha^\sharp, v) = \alpha(v)

for {{mvar|α}} in {{math|T{{su|lh=1em|p=∗|b=p }}M}} and {{mvar|v}} in {{math|T{{sub|p}}M}}. The vector \alpha^\sharp is called the sharp of {{mvar|α}}. The sharp map is a smooth bundle map \sharp : \mathrm{T}^*M \to \mathrm{T}M.

Flat and sharp are mutually inverse isomorphisms of smooth vector bundles, hence, for each {{mvar|p}} in {{mvar|M}}, there are mutually inverse vector space isomorphisms between {{math|T{{sub|p }}M}} and {{math|T{{su|lh=1em|p=∗|b=p }}M}}.

The flat and sharp maps can be applied to vector fields and covector fields by applying them to each point. Hence, if {{mvar|X}} is a vector field and {{mvar|ω}} is a covector field,

: X^\flat = g(X,\cdot)

and

: g(\omega^\sharp, X) = \omega(X).
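As a purely numerical sketch (the metric matrix below is a hypothetical choice, not taken from the text), at a single point the flat and sharp maps are just multiplication of component vectors by the matrix of g_p and by its inverse:

```python
import numpy as np

# Hypothetical metric components g_p at a point p of a 2-dimensional manifold
# (symmetric and non-degenerate; here positive definite), chosen for illustration.
g = np.array([[2.0, 1.0],
              [1.0, 3.0]])
g_inv = np.linalg.inv(g)

v = np.array([1.0, -1.0])          # components of a tangent vector v at p

v_flat = g @ v                     # (v^flat)_j = g_ij v^i, a covector at p
alpha = v_flat
alpha_sharp = g_inv @ alpha        # sharp inverts flat: alpha^sharp = v

# Characterizing property of sharp: g_p(alpha^sharp, w) = alpha(w) for all w
w = np.array([0.5, 2.0])
lhs = alpha_sharp @ g @ w          # g_p(alpha^sharp, w)
rhs = alpha @ w                    # alpha(w)
```

The last two lines check the characterizing property g_p(α^♯, w) = α(w) for one sample vector w.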

= In a moving frame =

Suppose {{math|{e{{sub|i}}}{{null}}}} is a moving tangent frame (see also smooth frame) for the tangent bundle {{math|TM}} with, as dual frame (see also dual basis), the moving coframe (a moving tangent frame for the cotangent bundle \mathrm{T}^*M; see also coframe) {{math|{e{{sup|i}}}{{null}}}}. Then the pseudo-Riemannian metric, which is a symmetric and nondegenerate {{math|2}}-covariant tensor field, can be written locally in terms of this coframe as {{math|g {{=}} g{{sub|ij }}e{{sup|i}} ⊗ e{{i sup|j}}}} using Einstein summation notation.

Given a vector field {{math|X {{=}} X{{i sup|i }}e{{sub|i}}}} and denoting {{math|1=g{{sub|ij}} X{{i sup|i}} = X{{sub|j}}}}, its flat is

: X^\flat = g_{ij} X^i \mathbf{e}^j = X_j \mathbf{e}^j.

This is referred to as lowering an index.

In the same way, given a covector field {{math|1=ω = ω{{sub|i}} e{{sup|i}}}} and denoting {{math|1=g{{sup|ij}} ω{{sub|i}} = ω{{sup|j}}}}, its sharp is

: \omega^\sharp = g^{ij} \omega_i \mathbf{e}_j = \omega^j \mathbf{e}_j,

where {{math|g{{i sup|ij}}}} are the components of the inverse metric tensor (given by the entries of the inverse matrix to {{math|g{{sub|ij}}}}). Taking the sharp of a covector field is referred to as raising an index.
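In components, lowering and raising as described above can be sketched with NumPy's einsum, using a hypothetical randomly generated (positive-definite, hence non-degenerate) metric in some frame:

```python
import numpy as np

# A hypothetical frame in which the metric has components g_ij;
# a symmetric positive-definite matrix is automatically non-degenerate.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
g = A @ A.T + 3.0 * np.eye(3)
g_inv = np.linalg.inv(g)           # components g^{ij} of the inverse metric

X = rng.normal(size=3)             # vector components X^i in this frame

X_low = np.einsum('ij,i->j', g, X)         # lowering: X_j = g_ij X^i
X_up = np.einsum('ij,i->j', g_inv, X_low)  # raising recovers X^i
```

Raising after lowering returns the original components, reflecting that ♯ and ♭ are mutually inverse.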

=Extension to tensor products=

The musical isomorphisms may also be extended to the bundles

\bigotimes ^k {\rm T} M, \qquad \bigotimes ^k {\rm T}^* M .

Which index is to be raised or lowered must be indicated. For instance, consider the {{nowrap|(0, 2)}}-tensor field {{math|X {{=}} X{{sub|ij }}e{{sup|i}} ⊗ e{{i sup|j}}}}. Raising the second index, we get the {{nowrap|(1, 1)}}-tensor field

X^\sharp = g^{jk} X_{ij} \, {\rm e}^i \otimes {\rm e}_k .
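The index raising above can be sketched numerically (with an illustrative diagonal metric; the einsum subscripts mirror the abstract indices):

```python
import numpy as np

g = np.diag([1.0, 2.0, 4.0])       # hypothetical diagonal metric g_ij
g_inv = np.linalg.inv(g)           # g^{ij}

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 3))        # components X_ij of a (0,2)-tensor

# Raise the second index: (X^sharp)_i^k = g^{jk} X_ij, a (1,1)-tensor
X_sharp = np.einsum('jk,ij->ik', g_inv, X)

# Lowering it again recovers X: X_ij = g_{kj} (X^sharp)_i^k
X_back = np.einsum('kj,ik->ij', g, X_sharp)
```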

=Extension to ''k''-vectors and ''k''-forms=

In the context of exterior algebra, an extension of the musical operators may be defined on {{math|⋀V}} and its dual {{math|⋀{{su|lh=1em|b=|p=∗}}V}}, which with minor abuse of notation may be denoted the same, and are again mutual inverses:{{sfn|Vaz|da Rocha|2016|loc=pp. 48, 50}}

\flat : {\bigwedge} V \to {\bigwedge}^* V , \qquad \sharp : {\bigwedge}^* V \to {\bigwedge} V ,

defined by

(X \wedge \ldots \wedge Z)^\flat = X^\flat \wedge \ldots \wedge Z^\flat , \qquad

(\alpha \wedge \ldots \wedge \gamma)^\sharp = \alpha^\sharp \wedge \ldots \wedge \gamma^\sharp .

In this extension, in which {{math|{{music|flat}}}} maps p-vectors to p-covectors and {{math|{{music|sharp}}}} maps p-covectors to p-vectors, all the indices of a totally antisymmetric tensor are simultaneously raised or lowered, and so no index need be indicated:

Y^\sharp = ( Y_{i \dots k} \mathbf{e}^i \otimes \dots \otimes \mathbf{e}^k)^\sharp = g^{ir} \dots g^{kt} \, Y_{i \dots k} \, \mathbf{e}_r \otimes \dots \otimes \mathbf{e}_t .

= Vector bundles with bundle metrics =

More generally, musical isomorphisms always exist between a vector bundle endowed with a bundle metric and its dual.

Trace of a tensor through a metric tensor

Given a type {{nowrap|(0, 2)}} tensor field {{math|X {{=}} X{{sub|ij }}e{{sup|i}} ⊗ e{{i sup|j}}}}, we define the trace of {{mvar|X}} through the metric tensor {{mvar|g}} by

\operatorname{tr}_g ( X ) := \operatorname{tr} ( X^\sharp ) = \operatorname{tr} ( g^{jk} X_{ij} \, {\bf e}^i \otimes {\bf e}_k ) = g^{ij} X_{ij} .

Observe that the definition of trace is independent of the choice of index to raise, since the metric tensor is symmetric.
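A short numerical check of this independence, with a hypothetical metric and tensor components:

```python
import numpy as np

g = np.diag([1.0, 2.0, 4.0])       # hypothetical metric g_ij
g_inv = np.linalg.inv(g)

X = np.arange(9.0).reshape(3, 3)   # components X_ij of a (0,2)-tensor

# tr_g(X) = g^{ij} X_ij, computed directly...
tr_direct = np.einsum('ij,ij->', g_inv, X)
# ...equals the ordinary trace of X with either index raised (g is symmetric)
tr_second_raised = np.trace(np.einsum('jk,ij->ik', g_inv, X))
tr_first_raised = np.trace(np.einsum('ik,ij->kj', g_inv, X))
```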

Vectors, covectors and the metric

=Mathematical formulation=

Mathematically, vectors are elements of a vector space V over a field K; for use in physics, V is usually taken with K=\mathbb{R} or \mathbb{C}. Concretely, if the dimension n=\text{dim}(V) of V is finite, then, after making a choice of basis, we can view such vector spaces as \mathbb{R}^n or \mathbb{C}^n.

The dual space is the space of linear functionals mapping V\rightarrow K. Concretely, in the case where the vector space has an inner product, in matrix notation these can be thought of as row vectors, which give a number when applied to column vectors. We denote this by V^*:= \text{Hom}(V,K), so that \alpha \in V^* is a linear map \alpha:V\rightarrow K.

Then under a choice of basis \{e_i\}, we can view vectors v\in V as a K^n vector with components v^i (vectors are taken by convention to have indices up). This picks out a choice of basis \{e^i\} for V^*, defined by the set of relations e^i(e_j) = \delta^i_j.

For applications, raising and lowering is done using a structure known as the (pseudo{{nbh}})metric tensor (the 'pseudo-' refers to the fact we allow the metric to be indefinite). Formally, this is a non-degenerate, symmetric bilinear form

:g:V\times V\rightarrow K \text{ a bilinear form}

:g(u,v) = g(v,u) \text{ for all }u,v\in V \text{ (Symmetric)}

:\forall v\in V \text{ with } v\neq \vec{0},\ \exists u\in V \text{ such that } g(v,u)\neq 0 \text{ (Non-degenerate)}

In this basis, it has components g(e_i,e_j) = g_{ij}, and can be viewed as a symmetric matrix in \text{Mat}_{n\times n}(K) with these components. The inverse metric exists due to non-degeneracy and is denoted g^{ij}, and as a matrix is the inverse to g_{ij}.

=Raising and lowering vectors and covectors=

Raising and lowering is then done in coordinates. Given a vector with components v^i, we can contract with the metric to obtain a covector:

:g_{ij}v^j = v_i

and this is what we mean by lowering the index. Conversely, contracting a covector with the inverse metric gives a vector:

:g^{ij}\alpha_j=\alpha^i.

This process is called raising the index.

Raising and then lowering the same index (or conversely) are inverse operations, which is reflected in the metric and inverse metric tensors being inverse to each other (as is suggested by the terminology):

:g^{ij}g_{jk}=g_{kj}g^{ji}={\delta^i}_k={\delta_k}^i

where \delta^i_j is the Kronecker delta or identity matrix.
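Taking the metric of signature (1, 3) in an orthonormal basis as an example, this inverse relation can be verified numerically:

```python
import numpy as np

# Hypothetical metric of signature (1, 3) in an orthonormal basis
g = np.diag([-1.0, 1.0, 1.0, 1.0])
g_inv = np.linalg.inv(g)           # g^{ij}

# g^{ij} g_{jk} = delta^i_k, the identity matrix
delta = np.einsum('ij,jk->ik', g_inv, g)

# Raising and then lowering a vector index is the identity operation
v = np.array([3.0, 1.0, 4.0, 1.0])
v_round_trip = np.einsum('ij,jk,k->i', g_inv, g, v)
```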

Finite-dimensional real vector spaces with (pseudo-)metrics are classified up to signature, a coordinate-free property which is well-defined by Sylvester's law of inertia. Possible metrics on n=p+q dimensional real space are indexed by their signature (p,q). The metric has signature (p,q) if there exists a basis (referred to as an orthonormal basis) such that in this basis the metric takes the form (g_{ij}) = \text{diag}(+1, \cdots, +1, -1, \cdots, -1) with p positive ones and q negative ones.

The concrete space with elements which are n-vectors and this concrete realization of the metric is denoted \mathbb{R}^{p,q}=(\mathbb{R}^n,g_{ij}), where the 2-tuple (\mathbb{R}^n, g_{ij}) is meant to make it clear that the underlying vector space of \mathbb{R}^{p,q} is \mathbb{R}^n: equipping this vector space with the metric g_{ij} is what turns the space into \mathbb{R}^{p,q}.

Examples:

  • \mathbb{R}^3 is a model for 3-dimensional space; its metric is equivalent to the standard dot product.
  • \mathbb{R}^{n,0} = \mathbb{R}^n, that is, n-dimensional real space as an inner product space with g_{ij} = \delta_{ij}. In Euclidean space, raising and lowering is unnecessary because vector and covector components coincide.
  • \mathbb{R}^{1,3} is Minkowski space (or rather, Minkowski space in a choice of orthonormal basis), a model for spacetime with weak curvature. It is common convention to use Greek indices when writing expressions involving tensors in Minkowski space, while Latin indices are reserved for Euclidean space.

Well-formulated expressions are constrained by the rules of Einstein summation: any index may appear at most twice and furthermore a raised index must contract with a lowered index. With these rules we can immediately see that an expression such as

:g_{ij}v^iu^j

is well formulated while

:g_{ij}v_iu_j

is not.

=Example in Minkowski spacetime=

The covariant 4-position is given by

:X_\mu = (-ct, x, y, z)

with components:

:X_0 = -ct, \quad X_1 = x, \quad X_2 = y, \quad X_3 = z

(where {{mvar|x}},{{mvar|y}},{{mvar|z}} are the usual Cartesian coordinates) and the Minkowski metric tensor with metric signature (− + + +) is defined as

: \eta_{\mu \nu} = \eta^{\mu \nu} = \begin{pmatrix}

-1 & 0 & 0 & 0 \\

0 & 1 & 0 & 0 \\

0 & 0 & 1 & 0 \\

0 & 0 & 0 & 1

\end{pmatrix}

in components:

:\eta_{00} = -1, \quad \eta_{i0} = \eta_{0i} = 0,\quad \eta_{ij} = \delta_{ij}\,(i,j \neq 0).

To raise the index, contract with the inverse metric tensor:

:X^\lambda = \eta^{\lambda\mu}X_\mu = \eta^{\lambda 0}X_0 + \eta^{\lambda i}X_i

then for {{math|λ {{=}} 0}}:

:X^0 = \eta^{00}X_0 + \eta^{0i}X_i = -X_0

and for {{math|λ {{=}} j {{=}} 1, 2, 3}}:

:X^j = \eta^{j0}X_0 + \eta^{ji}X_i = \delta^{ji}X_i = X_j \,.

So the index-raised contravariant 4-position is:

:X^\mu = (ct, x, y, z)\,.

This operation is equivalent to the matrix multiplication

: \begin{pmatrix}

-1 & 0 & 0 & 0 \\

0 & 1 & 0 & 0 \\

0 & 0 & 1 & 0 \\

0 & 0 & 0 & 1

\end{pmatrix} \begin{pmatrix}

-ct \\

x \\

y \\

z

\end{pmatrix} = \begin{pmatrix}

ct \\

x \\

y \\

z

\end{pmatrix}.
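The raising step can be reproduced numerically (the coordinate values below are illustrative, with c set to an arbitrary number):

```python
import numpy as np

# Illustrative numerical values for the coordinates and c
c = 2.0
t, x, y, z = 1.0, 3.0, -2.0, 0.5

eta_inv = np.diag([-1.0, 1.0, 1.0, 1.0])   # eta^{mu nu}, signature (- + + +)

X_cov = np.array([-c * t, x, y, z])        # covariant components X_mu
X_contra = eta_inv @ X_cov                 # X^mu = eta^{lambda mu} X_mu
```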

Given two vectors, X^\mu and Y^\mu, we can write down their (pseudo-)inner product in two ways:

:\eta_{\mu\nu}X^\mu Y^\nu.

By lowering indices, we can write this expression as

:X_\mu Y^\mu.

In matrix notation, the first expression can be written as

: \begin{pmatrix} X^0 & X^1 & X^2 & X^3 \end{pmatrix} \begin{pmatrix}

-1 & 0 & 0 & 0 \\

0 & 1 & 0 & 0 \\

0 & 0 & 1 & 0 \\

0 & 0 & 0 & 1

\end{pmatrix} \begin{pmatrix}

Y^0 \\

Y^1 \\

Y^2 \\

Y^3\end{pmatrix}

while the second is, after lowering the indices of X^\mu,

:\begin{pmatrix} -X^0 & X^1 & X^2 & X^3 \end{pmatrix}\begin{pmatrix}

Y^0 \\

Y^1 \\

Y^2 \\

Y^3\end{pmatrix}.
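The agreement of the two expressions can be checked numerically with hypothetical components:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # eta_{mu nu}, signature (- + + +)

# Hypothetical components of two 4-vectors
X = np.array([1.0, 2.0, 3.0, 4.0])     # X^mu
Y = np.array([5.0, 6.0, 7.0, 8.0])     # Y^mu

inner_via_metric = np.einsum('mn,m,n->', eta, X, Y)  # eta_{mu nu} X^mu Y^nu
X_cov = eta @ X                                      # X_mu = eta_{mu nu} X^nu
inner_via_lowering = X_cov @ Y                       # X_mu Y^mu
```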

=Coordinate free formalism=

It is instructive to consider what raising and lowering means in the abstract linear algebra setting.

We first fix definitions: V is a finite-dimensional vector space over a field K. Typically K=\mathbb{R} or \mathbb{C}.

\phi is a non-degenerate bilinear form, that is,

\phi:V\times V\rightarrow K

is a map which is linear in both arguments.

By \phi being non-degenerate we mean that for each v\in V such that v\neq 0, there is a u\in V such that

:\phi(v,u)\neq 0.

In concrete applications, \phi is often considered a structure on the vector space, for example an inner product or more generally a metric tensor which is allowed to have indefinite signature, or a symplectic form \omega. Together these cover the cases where \phi is either symmetric or anti-symmetric, but in full generality \phi need not be either of these cases.

There is a partial evaluation map associated to \phi,

:\phi(-, \cdot ):V\rightarrow V^*; \quad v\mapsto \phi(v,\cdot)

where - denotes the argument which is evaluated, and \cdot denotes an argument whose evaluation is deferred. Then \phi(v,\cdot) is an element of V^*, which sends u\mapsto \phi(v,u).

We made a choice to define this partial evaluation map on the first argument. We could just as well have defined it on the second argument, and non-degeneracy is independent of the argument chosen. Moreover, when \phi has a well-defined (anti-)symmetry, evaluating on either argument is equivalent (up to a minus sign in the anti-symmetric case).

Non-degeneracy shows that the partial evaluation map is injective, or equivalently that the kernel of the map is trivial. In finite dimensions, the dual space V^* has the same dimension as V, so non-degeneracy is enough to conclude that the map is a linear isomorphism. If \phi is a structure on the vector space, this is sometimes called the canonical isomorphism V\rightarrow V^*.

It therefore has an inverse, \phi^{-1}:V^*\rightarrow V, and this is enough to define an associated bilinear form on the dual:

:\phi^{-1}:V^*\times V^*\rightarrow K, \qquad \phi^{-1}(\alpha,\beta) = \phi(\phi^{-1}(\alpha),\phi^{-1}(\beta)),

where the repeated use of \phi^{-1} is disambiguated by the argument taken. That is, \phi^{-1}(\alpha) is the inverse map, while \phi^{-1}(\alpha,\beta) is the bilinear form.

Checking these expressions in coordinates makes it evident that this is what raising and lowering indices means abstractly.
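A small numerical sanity check (with a hypothetical positive-definite form φ represented by its component matrix) that the induced form on the dual is simply contraction with the inverse matrix:

```python
import numpy as np

rng = np.random.default_rng(2)

# Matrix of a hypothetical non-degenerate symmetric bilinear form phi on R^4
A = rng.normal(size=(4, 4))
phi = A @ A.T + 4.0 * np.eye(4)    # symmetric positive definite, hence non-degenerate
phi_inv = np.linalg.inv(phi)

alpha = rng.normal(size=4)         # components of covectors alpha and beta
beta = rng.normal(size=4)

# Definition: phi^{-1}(alpha, beta) = phi(phi^{-1}(alpha), phi^{-1}(beta))
rhs = (phi_inv @ alpha) @ phi @ (phi_inv @ beta)
# In components this is just contraction with the inverse matrix
lhs = alpha @ phi_inv @ beta
```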

Tensors

We will not develop the abstract formalism for tensors here. Formally, an (r,s) tensor is an object described via its components, with r indices up and s indices down. A generic (r,s) tensor is written

:T^{\mu_1\cdots \mu_r}{}_{\nu_1\cdots \nu_s}.

We can use the metric tensor to raise and lower tensor indices just as we raised and lowered vector indices and raised covector indices.

=Examples=

  • A (0,0) tensor is a number in the field K.
  • A (1,0) tensor is a vector.
  • A (0,1) tensor is a covector.
  • A (0,2) tensor is a bilinear form. An example is the metric tensor g_{\mu\nu}.
  • A (1,1) tensor is a linear map. An example is the delta, \delta^\mu{}_\nu, which is the identity map, or a Lorentz transformation \Lambda^\mu{}_\nu.

=Example of raising and lowering=

For a (0,2) tensor,{{cite book |title=Tensor Calculus |first=D. C. |last=Kay |series=Schaum’s Outlines |publisher=McGraw Hill |location=New York |year=1988 |isbn=0-07-033484-6 }} contracting twice with the inverse metric tensor, once for each index, raises each index:

:A^{\mu\nu}=g^{\mu\rho}g^{\nu\sigma}A_{\rho \sigma}.

Similarly, contracting twice with the metric tensor, once for each index, lowers each index:

:A_{\mu\nu}=g_{\mu\rho}g_{\nu\sigma}A^{\rho\sigma}.

Let's apply this to the theory of electromagnetism.

The contravariant electromagnetic tensor in the {{math|(+ − − −)}} signature is given by<ref>NB: Some texts, such as {{cite book | author=Griffiths, David J. | authorlink = David J. Griffiths | title=Introduction to Elementary Particles | publisher=Wiley, John & Sons, Inc | year=1987 | isbn=0-471-60386-4}}, will show this tensor with an overall factor of −1. This is because they used the negative of the metric tensor used here: {{math|(− + + +)}}, see metric signature. In older texts such as Jackson (2nd edition), there are no factors of {{mvar|c}} since they are using Gaussian units. Here SI units are used.</ref>

:F^{\alpha\beta} = \begin{pmatrix}

0 & -\frac{E_x}{c} & -\frac{E_y}{c} & -\frac{E_z}{c} \\

\frac{E_x}{c} & 0 & -B_z & B_y \\

\frac{E_y}{c} & B_z & 0 & -B_x \\

\frac{E_z}{c} & -B_y & B_x & 0

\end{pmatrix}.

In components,

:F^{0i} = -F^{i0} = - \frac{E^i}{c} ,\quad F^{ij} = - \varepsilon^{ijk} B_k .

To obtain the covariant tensor {{mvar|Fαβ}}, contract with the metric tensor:

:\begin{align}

F_{\alpha\beta} & = \eta_{\alpha\gamma} \eta_{\beta\delta} F^{\gamma\delta} \\

& = \eta_{\alpha 0} \eta_{\beta 0} F^{0 0} + \eta_{\alpha i} \eta_{\beta 0} F^{i 0}

+ \eta_{\alpha 0} \eta_{\beta i} F^{0 i} + \eta_{\alpha i} \eta_{\beta j} F^{i j}

\end{align}

and since {{math|F00 {{=}} 0 }} and {{math|F0i {{=}} − Fi0}}, this reduces to

:F_{\alpha\beta} = \left(\eta_{\alpha i} \eta_{\beta 0} - \eta_{\alpha 0} \eta_{\beta i} \right) F^{i 0} + \eta_{\alpha i} \eta_{\beta j} F^{i j}

Now for {{math|α {{=}} 0}}, {{math|β {{=}} k {{=}} 1, 2, 3}}:

:\begin{align}

F_{0k} & = \left(\eta_{0i} \eta_{k0} - \eta_{00} \eta_{ki} \right) F^{i0} + \eta_{0i} \eta_{kj} F^{ij} \\

& = \bigl(0 - (-\delta_{ki}) \bigr) F^{i0} + 0 \\

& = F^{k0} = - F^{0k} \\

\end{align}

and by antisymmetry, for {{math|α {{=}} k {{=}} 1, 2, 3}}, {{math|β {{=}} 0}}:

: F_{k0} = - F^{k0}

then finally for {{math|α {{=}} k {{=}} 1, 2, 3}}, {{math|β {{=}} l {{=}} 1, 2, 3}};

:\begin{align}

F_{kl} & = \left(\eta_{ k i} \eta_{ l 0} - \eta_{ k 0} \eta_{ l i} \right) F^{i 0} + \eta_{ k i} \eta_{ l j} F^{i j} \\

& = 0 + \delta_{ k i} \delta_{ l j} F^{i j} \\

& = F^{k l} \\

\end{align}

The (covariant) lower indexed tensor is then:

:F_{\alpha\beta} = \begin{pmatrix}

0 & \frac{E_x}{c} & \frac{E_y}{c} & \frac{E_z}{c} \\

-\frac{E_x}{c} & 0 & -B_z & B_y \\

-\frac{E_y}{c} & B_z & 0 & -B_x \\

-\frac{E_z}{c} & -B_y & B_x & 0

\end{pmatrix}

This operation is equivalent to the matrix multiplication

: \begin{pmatrix}

-1 & 0 & 0 & 0 \\

0 & 1 & 0 & 0 \\

0 & 0 & 1 & 0 \\

0 & 0 & 0 & 1

\end{pmatrix}

\begin{pmatrix}

0 & -\frac{E_x}{c} & -\frac{E_y}{c} & -\frac{E_z}{c} \\

\frac{E_x}{c} & 0 & -B_z & B_y \\

\frac{E_y}{c} & B_z & 0 & -B_x \\

\frac{E_z}{c} & -B_y & B_x & 0

\end{pmatrix}

\begin{pmatrix}

-1 & 0 & 0 & 0 \\

0 & 1 & 0 & 0 \\

0 & 0 & 1 & 0 \\

0 & 0 & 0 & 1

\end{pmatrix}

=\begin{pmatrix}

0 & \frac{E_x}{c} & \frac{E_y}{c} & \frac{E_z}{c} \\

-\frac{E_x}{c} & 0 & -B_z & B_y \\

-\frac{E_y}{c} & B_z & 0 & -B_x \\

-\frac{E_z}{c} & -B_y & B_x & 0

\end{pmatrix}.
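This double contraction can be reproduced numerically; the code below uses illustrative field values and units with c = 1. (Note that η and −η give the same result when applied twice, so either sign convention for the matrix works here.)

```python
import numpy as np

c = 1.0                            # units with c = 1 for readability
Ex, Ey, Ez = 1.0, 2.0, 3.0         # illustrative field components
Bx, By, Bz = 4.0, 5.0, 6.0

F_up = np.array([[0,     -Ex/c, -Ey/c, -Ez/c],
                 [Ex/c,   0,    -Bz,    By],
                 [Ey/c,   Bz,    0,    -Bx],
                 [Ez/c,  -By,    Bx,    0]])

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
F_low = eta @ F_up @ eta           # F_{ab} = eta_{ac} eta_{bd} F^{cd}

F_low_expected = np.array([[0,      Ex/c,  Ey/c,  Ez/c],
                           [-Ex/c,  0,    -Bz,    By],
                           [-Ey/c,  Bz,    0,    -Bx],
                           [-Ez/c, -By,    Bx,    0]])
```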

=General rank=

For a tensor of order {{mvar|n}}, indices are raised by (compatible with above):

:g^{j_1i_1}g^{j_2i_2}\cdots g^{j_ni_n}A_{i_1i_2\cdots i_n} = A^{j_1j_2\cdots j_n}

and lowered by:

:g_{j_1i_1}g_{j_2i_2}\cdots g_{j_ni_n}A^{i_1i_2\cdots i_n} = A_{j_1j_2\cdots j_n}

and for a mixed tensor:

:g_{p_1i_1}g_{p_2i_2}\cdots g_{p_ni_n}g^{q_1j_1}g^{q_2j_2}\cdots g^{q_mj_m}{A^{i_1i_2\cdots i_n}}_{j_1j_2\cdots j_m} = {A_{p_1p_2\cdots p_n}}^{q_1q_2\cdots q_m}

We need not raise or lower all indices at once: it is perfectly fine to raise or lower a single index. Lowering an index of an (r,s) tensor gives an (r-1,s+1) tensor, while raising an index gives an (r+1,s-1) tensor (where r and s have suitable values; for example, we cannot lower an index of a (0,2) tensor).
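Lowering and re-raising a single index of a (1,1)-tensor can be sketched as follows (hypothetical metric and components):

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0])      # hypothetical metric g_ij
g_inv = np.linalg.inv(g)

rng = np.random.default_rng(3)
T = rng.normal(size=(3, 3))        # components of a (1,1)-tensor T^i_j

# Lower only the upper index: T_{pj} = g_{pi} T^i_j, giving a (0,2)-tensor
T_low = np.einsum('pi,ij->pj', g, T)
# Raise it again with the inverse metric: recovers T^i_j
T_back = np.einsum('qp,pj->qj', g_inv, T_low)
```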


Citations

{{Reflist}}

References

  • {{cite book|last=Lee|first=J. M.|title=Introduction to Smooth manifolds|year=2003|series=Springer Graduate Texts in Mathematics|isbn=0-387-95448-1|volume=218}}
  • {{cite book|last=Lee|first=J. M.|title=Riemannian Manifolds – An Introduction to Curvature|year=1997|publisher=Springer Verlag|series=Springer Graduate Texts in Mathematics|volume=176|isbn=978-0-387-98322-6}}
  • {{cite book|last1=Vaz|first1=Jayme|last2=da Rocha|first2=Roldão|year=2016|title=An Introduction to Clifford Algebras and Spinors |publisher=Oxford University Press|isbn=978-0-19-878-292-6}}

{{Riemannian geometry}}

{{tensors}}

{{Manifolds}}

Category:Differential geometry

Category:Riemannian geometry

Category:Riemannian manifolds

Category:Symplectic geometry