Limit (mathematics)

{{Short description|Value approached by a mathematical object}}

{{other uses|Limit (disambiguation)#Mathematics{{!}}Limit § Mathematics}}

In mathematics, a limit is the value that a function (or sequence) approaches as the argument (or index) approaches some value.{{cite book |last=Stewart |first=James |author-link=James Stewart (mathematician) |year=2008 |title=Calculus: Early Transcendentals |edition=6th |publisher=Brooks/Cole |isbn=978-0-495-01166-8 |url-access=registration |url=https://archive.org/details/calculusearlytra00stew_1 }} Limits of functions are essential to calculus and mathematical analysis, and are used to define continuity, derivatives, and integrals.

The concept of a limit of a sequence is further generalized to the concept of a limit of a topological net, and is closely related to limit and direct limit in category theory.

The limit inferior and limit superior provide generalizations of the concept of a limit which are particularly relevant when the limit at a point may not exist.

Notation

In formulas, a limit of a function is usually written as

: \lim_{x \to c} f(x) = L,

and is read as "the limit of {{math|f}} of {{mvar|x}} as {{mvar|x}} approaches {{mvar|c}} equals {{math|L}}". This means that the value of the function {{math|f}} can be made arbitrarily close to {{math|L}}, by choosing {{mvar|x}} sufficiently close to {{mvar|c}}. Alternatively, the fact that a function {{math|f}} approaches the limit {{math|L}} as {{mvar|x}} approaches {{mvar|c}} is sometimes denoted by a right arrow (→ or \rightarrow), as in

:f(x) \to L \text{ as } x \to c,

which reads "f of x tends to L as x tends to c".

History

According to Hankel (1871), the modern concept of limit originates from Proposition X.1 of Euclid's Elements, which forms the basis of the Method of exhaustion found in Euclid and Archimedes: "Two unequal magnitudes being set out, if from the greater there is subtracted a magnitude greater than its half, and from that which is left a magnitude greater than its half, and if this process is repeated continually, then there will be left some magnitude less than the lesser magnitude set out."{{cite book |last1=Schubring |first1=Gert |title=Conflicts between generalization, rigor, and intuition: number concepts underlying the development of analysis in 17th-19th century France and Germany |date=2005 |publisher=Springer |location=New York |isbn=0387228365 |pages=22–23}}{{cite web |title=Euclid's Elements, Book X, Proposition 1 |url=http://aleph0.clarku.edu/~djoyce/elements/bookX/propX1.html |website=aleph0.clarku.edu}}

Grégoire de Saint-Vincent gave the first definition of limit (terminus) of a geometric series in his work Opus Geometricum (1647): "The terminus of a progression is the end of the series, which none progression can reach, even not if she is continued in infinity, but which she can approach nearer than a given segment."{{Cite journal |last=Van Looy |first=Herman |date=1984 |title=A chronology and historical analysis of the mathematical manuscripts of Gregorius a Sancto Vincentio (1584–1667) |journal=Historia Mathematica |language=en |volume=11 |issue=1 |pages=57–75 |doi=10.1016/0315-0860(84)90005-3|doi-access=free }}

In the Scholium to Principia in 1687, Isaac Newton had a clear definition of a limit, stating that "Those ultimate ratios... are not actually ratios of ultimate quantities, but limits... which they can approach so closely that their difference is less than any given quantity".{{Cite book |last=Rowlands |first=Peter |url=https://books.google.com/books?id=ipA4DwAAQBAJ&pg=PA28 |title=Newton and the Great World System |date=2017 |publisher=World Scientific Publishing |isbn=978-1-78634-372-7 |pages=28 |language=en |doi=10.1142/q0108}}

The modern definition of a limit goes back to Bernard Bolzano who, in 1817, developed the basics of the epsilon-delta technique to define continuous functions. However, his work remained unknown to other mathematicians until thirty years after his death.{{Citation|title=Bolzano, Cauchy, Epsilon, Delta|last=Felscher|first=Walter|journal=American Mathematical Monthly|volume=107|issue=9|pages=844–862|year=2000|doi=10.2307/2695743|jstor=2695743}}

Augustin-Louis Cauchy in 1821,{{cite book |last1=Larson |first1=Ron |title=Calculus of a single variable |last2=Edwards |first2=Bruce H. |publisher=Brooks/Cole, Cengage Learning |year=2010 |isbn=978-0-547-20998-2 |edition=Ninth |author-link1=Ron Larson (mathematician)}} followed by Karl Weierstrass, formalized the definition of the limit of a function which became known as the (ε, δ)-definition of limit.

The modern notation of placing the arrow below the limit symbol is due to G. H. Hardy, who introduced it in his book A Course of Pure Mathematics in 1908.{{Citation|last=Miller|first=Jeff|title=Earliest Uses of Symbols of Calculus|date=1 December 2004|url=http://jeff560.tripod.com/calculus.html|access-date=2008-12-18|archive-date=2015-05-01|archive-url=https://web.archive.org/web/20150501123549/http://jeff560.tripod.com/calculus.html|url-status=live}}

Types of limits

= In sequences =

{{main|Limit of a sequence}}

== Real numbers ==

The expression 0.999... should be interpreted as the limit of the sequence 0.9, 0.99, 0.999, ... and so on. This sequence can be rigorously shown to have the limit 1, and therefore this expression is meaningfully interpreted as having the value 1.{{citation |last=Stillwell |first=John |title=Elements of algebra: geometry, numbers, equations |pages=42 |year=1994 |publisher=Springer |isbn=978-1441928399 |author-link=John Stillwell}}

Formally, suppose {{math|a1, a2, ...}} is a sequence of real numbers. The real number {{math|L}} is the limit of this sequence if and only if for every real number {{math|ε > 0}}, there exists a natural number {{math|N}} such that for all {{math|n > N}}, we have {{math|{{abs|an − L}} < ε}}.{{Cite web|last=Weisstein|first=Eric W.|title=Limit|url=https://mathworld.wolfram.com/Limit.html|access-date=2020-08-18|website=mathworld.wolfram.com|language=en|archive-date=2020-06-20|archive-url=https://web.archive.org/web/20200620203909/https://mathworld.wolfram.com/Limit.html|url-status=live}}

The common notation

\lim_{n \to \infty} a_n = L

is read as:

:"The limit of an as n approaches infinity equals L" or "The limit as n approaches infinity of an equals L".

The formal definition intuitively means that eventually, all elements of the sequence get arbitrarily close to the limit, since the absolute value {{math|{{abs|an − L}}}} is the distance between {{math|an}} and {{math|L}}.

Not every sequence has a limit. A sequence with a limit is called convergent; otherwise it is called divergent. One can show that a convergent sequence has only one limit.
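The ε–N definition lends itself to a numerical sketch. The following Python snippet is purely illustrative (the helper name `find_N` is not standard): it searches for a suitable {{math|N}} for the sequence {{math|an {{=}} n/(n + 1)}}, which converges to 1.

```python
# Illustration of the epsilon-N definition for a_n = n / (n + 1),
# which converges to L = 1, since |a_n - 1| = 1 / (n + 1).

def a(n):
    return n / (n + 1)

L = 1.0

def find_N(eps):
    """Return an N such that |a_n - L| < eps for every n > N.
    Works here because |a_n - L| decreases monotonically."""
    n = 1
    while abs(a(n) - L) >= eps:
        n += 1
    return n - 1

for eps in (0.1, 0.01, 0.001):
    N = find_N(eps)
    # every index beyond N stays within eps of the limit
    assert all(abs(a(n) - L) < eps for n in range(N + 1, N + 1000))
```

As ε shrinks, the required N grows, matching the "for every ε there exists an N" structure of the definition.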

The limit of a sequence and the limit of a function are closely related. On one hand, the limit as {{mvar|n}} approaches infinity of a sequence {{math|{{mset|an}}}} is simply the limit at infinity of a function {{math|a(n)}}, defined on the natural numbers {{math|{{mset|n}}}}. On the other hand, if X is the domain of a function {{math|f(x)}} and if the limit as {{mvar|n}} approaches infinity of {{math|f(xn)}} is {{math|L}} for every arbitrary sequence of points {{math|{{mset|xn}}}} in {{math|X}} ∖ {{mset|x0}} which converges to {{math|x0}}, then the limit of the function {{math|f(x)}} as {{math|x}} approaches {{math|x0}} is equal to {{math|L}}.{{harvtxt|Apostol|1974|pp=75–76}} One such sequence would be {{math|{{mset|x0 + 1/n}}}}.

== Infinity as a limit ==

There is also a notion of having a limit "tend to infinity", rather than to a finite value L. A sequence \{a_n\} is said to "tend to infinity" if, for each real number M > 0, known as the bound, there exists an integer N such that for each n > N,

a_n > M.

That is, for every possible bound, the sequence eventually exceeds the bound. This is often written \lim_{n\rightarrow \infty} a_n = \infty or simply a_n \rightarrow \infty.

It is possible for a sequence to be divergent, but not tend to infinity. Such sequences are called oscillatory. An example of an oscillatory sequence is a_n = (-1)^n.

There is a corresponding notion of tending to negative infinity, \lim_{n\rightarrow \infty} a_n = -\infty, defined by changing the inequality in the above definition to a_n < M, with M < 0.

A sequence \{a_n\} with \lim_{n\rightarrow \infty} |a_n| = \infty is unbounded, a notion equally valid for sequences in the complex numbers, or in any metric space. A bounded sequence cannot tend to infinity; likewise, a sequence that is bounded above cannot tend to positive infinity, and a sequence that is bounded below cannot tend to negative infinity.
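The "for every bound M" quantifier can be sketched numerically. This minimal Python illustration (the helper name `first_index_above` is hypothetical) uses the increasing sequence a_n = n^2:

```python
# Sketch of "tending to infinity": for each bound M, the sequence
# a_n = n**2 eventually exceeds M and stays above it.

def a(n):
    return n ** 2

def first_index_above(M):
    """Smallest N such that a_n > M for all n > N (valid here
    because a_n is increasing)."""
    n = 0
    while a(n + 1) <= M:
        n += 1
    return n

for M in (10, 1000, 10 ** 6):
    N = first_index_above(M)
    # beyond N, every term exceeds the bound
    assert all(a(n) > M for n in range(N + 1, N + 100))
```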

== Metric space ==

The discussion of sequences above is for sequences of real numbers. The notion of limits can be defined for sequences valued in more abstract spaces, such as metric spaces. If M is a metric space with distance function d, and \{a_n\}_{n \geq 0} is a sequence in M, then the limit (when it exists) of the sequence is an element a\in M such that, given \varepsilon > 0, there exists an N such that for each n > N, we have

d(a, a_n) < \varepsilon.

An equivalent statement is that a_n \rightarrow a if the sequence of real numbers d(a, a_n) \rightarrow 0.

=== Example: R<sup>n</sup> ===

An important example is the space of n-dimensional real vectors, with elements \mathbf{x} = (x_1, \cdots, x_n) where each of the x_i is real. An example of a suitable distance function is the Euclidean distance, defined by

d(\mathbf{x}, \mathbf{y}) = \|\mathbf{x} - \mathbf{y}\| = \sqrt{\sum_i(x_i - y_i)^2}.

The sequence of points \{\mathbf{x}_n\}_{n \geq 0} converges to \mathbf{x} if the limit exists and \|\mathbf{x}_n - \mathbf{x}\| \rightarrow 0.

== Topological space ==

In some sense, the most abstract spaces in which limits can be defined are topological spaces. If X is a topological space with topology \tau, and \{a_n\}_{n \geq 0} is a sequence in X, then the limit (when it exists) of the sequence is a point a\in X such that, given an (open) neighborhood U\in \tau of a, there exists an N such that for every n > N,

a_n \in U

is satisfied. In this case, the limit (if it exists) may not be unique. However it must be unique if X is a Hausdorff space.

== Function space ==

This section deals with the idea of limits of sequences of functions, not to be confused with the idea of limits of functions, discussed below.

The field of functional analysis partly seeks to identify useful notions of convergence on function spaces. For example, consider the space of functions from a generic set E to \mathbb{R}. Given a sequence of functions \{f_n\}_{n > 0} such that each is a function f_n: E \rightarrow \mathbb{R}, suppose that there exists a function f: E \rightarrow \mathbb{R} such that for each x \in E,

f_n(x) \rightarrow f(x) \text{ or equivalently } \lim_{n \rightarrow \infty}f_n(x) = f(x).

Then the sequence f_n is said to converge pointwise to f. However, such sequences can exhibit unexpected behavior. For example, it is possible to construct a sequence of continuous functions which has a discontinuous pointwise limit.

Another notion of convergence is uniform convergence. The uniform distance between two functions f,g: E \rightarrow \mathbb{R} is the supremum of the difference between the two functions as the argument x \in E is varied. That is,

d(f,g) = \sup_{x \in E}|f(x) - g(x)|.

Then the sequence f_n is said to uniformly converge or have a uniform limit of f if f_n \rightarrow f with respect to this distance. The uniform limit has "nicer" properties than the pointwise limit. For example, the uniform limit of a sequence of continuous functions is continuous.
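The gap between the two notions can be sketched with the classic example f_n(x) = x^n on [0, 1] (an illustrative choice, not drawn from the text above): the pointwise limit is discontinuous at 1, and the uniform distance to that limit never drops below 1/2.

```python
# f_n(x) = x**n on [0, 1] converges pointwise to a discontinuous limit:
# f(x) = 0 for x < 1 and f(1) = 1. The convergence is not uniform.

def f_n(n, x):
    return x ** n

def f(x):  # the pointwise limit
    return 1.0 if x == 1.0 else 0.0

# Pointwise convergence at a fixed x < 1: x**n -> 0.
assert abs(f_n(5000, 0.9) - f(0.9)) < 1e-6

# Not uniform: for every n there is an x < 1 with |f_n(x) - f(x)| = 1/2,
# namely x = 0.5 ** (1/n), so sup_x |f_n(x) - f(x)| never falls below 1/2.
for n in (10, 100, 10000):
    x = 0.5 ** (1 / n)
    assert x < 1.0
    assert abs(f_n(n, x) - f(x)) > 0.49
```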

Many different notions of convergence can be defined on function spaces. This is sometimes dependent on the regularity of the space. Prominent examples of function spaces with some notion of convergence are Lp spaces and Sobolev space.

= In functions =

{{main|Limit of a function}}

[[File:Limit-at-infinity-graph.png|The limit of this function at infinity is {{math|L}}. For any arbitrary distance {{mvar|ε}}, there must be a value {{math|S}} such that the function stays within {{math|L ± ε}} for all {{math|x > S}}.|300x300px]]

Suppose {{math|f}} is a real-valued function and {{mvar|c}} is a real number. Intuitively speaking, the expression

: \lim_{x \to c}f(x) = L

means that {{math|f(x)}} can be made to be as close to {{math|L}} as desired, by making {{mvar|x}} sufficiently close to {{mvar|c}}.{{Cite web |last=Weisstein |first=Eric W. |title=Epsilon-Delta Definition |url=https://mathworld.wolfram.com/Epsilon-DeltaDefinition.html |access-date=2020-08-18 |website=mathworld.wolfram.com |language=en |archive-date=2020-06-25 |archive-url=https://web.archive.org/web/20200625125230/https://mathworld.wolfram.com/Epsilon-DeltaDefinition.html |url-status=live }} In that case, the above equation can be read as "the limit of {{math|f}} of {{mvar|x}}, as {{mvar|x}} approaches {{mvar|c}}, is {{math|L}}".

Formally, the definition of the "limit of f(x) as x approaches c" is given as follows. The limit is a real number L so that, given an arbitrary real number \varepsilon > 0 (thought of as the "error"), there is a \delta > 0 such that, for any x satisfying 0 < |x - c| < \delta, it holds that | f(x) - L | < \varepsilon. This is known as the (ε, δ)-definition of limit.

The inequality 0 < |x - c| is used to exclude c from the set of points under consideration, but some authors do not include this in their definition of limits, replacing 0 < |x - c| < \delta with simply |x - c| < \delta. This replacement is equivalent to additionally requiring that f be continuous at c.

It can be proven that there is an equivalent definition which makes manifest the connection between limits of sequences and limits of functions.{{cite web |url=https://dec41.user.srcf.net/h/IA_L/analysis_i |title=Analysis I (based on a course given by Timothy Gowers) |last=Chua |first=Dexter |website=Notes from the Mathematical Tripos}} The equivalent definition is given as follows. First observe that for every sequence \{x_n\} in the domain of f, there is an associated sequence \{f(x_n)\}, the image of the sequence under f. The limit is a real number L so that, for all sequences x_n \rightarrow c with x_n \neq c, the associated sequence f(x_n) \rightarrow L.
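The sequential characterization can be checked numerically for f(x) = (x^2 - 1)/(x - 1), whose limit at c = 1 is 2. This Python sketch (the particular sequences are illustrative choices) evaluates f along several sequences tending to 1 from different sides:

```python
# Sequential characterization of lim_{x -> 1} (x**2 - 1)/(x - 1) = 2:
# along any sequence x_n -> 1 with x_n != 1, f(x_n) -> 2.

def f(x):
    return (x ** 2 - 1) / (x - 1)

seqs = [
    [1 + 1 / n for n in range(1, 1000)],         # approach from above
    [1 - 1 / n ** 2 for n in range(2, 1000)],    # approach from below
    [1 + (-1) ** n / n for n in range(2, 1000)], # alternating sides
]

for xs in seqs:
    tail = [f(x) for x in xs[-100:]]
    # the image sequences all settle near the limit 2
    assert all(abs(v - 2) < 1e-2 for v in tail)
```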

== One-sided limit ==

{{Main article | One-sided limit}}

It is possible to define the notion of having a "left-handed" limit ("from below"), and a notion of a "right-handed" limit ("from above"). These need not agree. An example is given by the positive indicator function, f: \mathbb{R} \rightarrow \mathbb{R}, defined such that f(x) = 0 if x \leq 0, and f(x) = 1 if x > 0. At x = 0, the function has a "left-handed limit" of 0, a "right-handed limit" of 1, and its limit does not exist. Symbolically, this can be stated as, for this example,

\lim_{x \to 0^-}f(x) = 0, and \lim_{x \to 0^+}f(x) = 1, and from this it can be deduced that \lim_{x \to 0}f(x) does not exist, because \lim_{x \to 0^-}f(x) \neq \lim_{x \to 0^+}f(x).
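This behavior is easy to observe numerically. A minimal Python sketch of the indicator function above, sampled along sequences approaching 0 from each side:

```python
# One-sided limits of f(x) = 0 for x <= 0 and f(x) = 1 for x > 0:
# approaching 0 from the left gives 0, from the right gives 1.

def f(x):
    return 0.0 if x <= 0 else 1.0

left = [f(-1 / n) for n in range(1, 1000)]   # x -> 0 from below
right = [f(1 / n) for n in range(1, 1000)]   # x -> 0 from above

assert all(v == 0.0 for v in left)   # left-handed limit is 0
assert all(v == 1.0 for v in right)  # right-handed limit is 1
```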

== Infinity in limits of functions ==

It is possible to define the notion of "tending to infinity" in the domain of f,

\lim_{x \rightarrow +\infty} f(x) = L.

This could be considered equivalent to the limit as a reciprocal tends to 0:

\lim_{x' \rightarrow 0^+} f(1/x') = L.

or it can be defined directly: the "limit of f as x tends to positive infinity" is defined as a value L such that, given any real \varepsilon > 0, there exists an M > 0 so that for all x > M, |f(x) - L| < \varepsilon. The sequential characterization is equivalent: for every sequence x_n \rightarrow +\infty in the domain of f, we have f(x_n) \rightarrow L.

In these expressions, the infinity is normally considered to be signed (+\infty or -\infty) and corresponds to a one-sided limit of the reciprocal. A two-sided infinite limit can be defined, but an author would explicitly write \pm\infty to be clear.

It is also possible to define the notion of "tending to infinity" in the value of f,

\lim_{x \rightarrow c} f(x) = \infty.

Again, this could be defined in terms of a reciprocal:

\lim_{x \rightarrow c} \frac{1}{f(x)} = 0.

Or a direct definition can be given as follows: given any real number M>0, there is a \delta > 0 so that for 0 < |x - c| < \delta, the absolute value of the function satisfies |f(x)| > M. In terms of sequences: for every sequence x_n \rightarrow c with x_n \neq c, the sequence f(x_n) \rightarrow \infty.

This direct definition is easier to extend to one-sided infinite limits. While mathematicians do talk about functions approaching limits "from above" or "from below", there is not a standard mathematical notation for this as there is for one-sided limits.

= Nonstandard analysis =

In non-standard analysis (which involves a hyperreal enlargement of the number system), the limit of a sequence (a_n) can be expressed as the standard part of the value a_H of the natural extension of the sequence at an infinite hypernatural index n=H. Thus,

: \lim_{n \to \infty} a_n = \operatorname{st}(a_H) .

Here, the standard part function "st" rounds off each finite hyperreal number to the nearest real number (the difference between them is infinitesimal). This formalizes the natural intuition that for "very large" values of the index, the terms in the sequence are "very close" to the limit value of the sequence. Conversely, the standard part of a hyperreal a=[a_n] represented in the ultrapower construction by a Cauchy sequence (a_n), is simply the limit of that sequence:

: \operatorname{st}(a)=\lim_{n \to \infty} a_n .

In this sense, taking the limit and taking the standard part are equivalent procedures.

= Limit sets =

== Limit set of a sequence ==

Let \{a_n\}_{n > 0} be a sequence in a topological space X. For concreteness, X can be thought of as \mathbb{R}, but the definitions hold more generally. The limit set is the set of points a such that there is a convergent subsequence \{a_{n_k}\}_{k >0} with a_{n_k}\rightarrow a. In this context, such an a is sometimes called a limit point.

A use of this notion is to characterize the "long-term behavior" of oscillatory sequences. For example, consider the sequence a_n = (-1)^n. Starting from n=1, the first few terms of this sequence are -1, +1, -1, +1, \cdots. It can be checked that it is oscillatory, so has no limit, but has limit points \{-1, +1\}.
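The two limit points of this oscillatory sequence can be exhibited with a short Python sketch: the even-index and odd-index subsequences are constant, so each converges trivially.

```python
# Limit points of a_n = (-1)**n: the even-index subsequence is
# constantly +1 and the odd-index subsequence constantly -1, so the
# limit set is {-1, +1} even though the sequence itself diverges.

def a(n):
    return (-1) ** n

evens = [a(n) for n in range(2, 100, 2)]  # convergent subsequence -> +1
odds = [a(n) for n in range(1, 100, 2)]   # convergent subsequence -> -1

assert set(evens) == {1} and set(odds) == {-1}
assert set(a(n) for n in range(1, 100)) == {-1, 1}  # the limit set
```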

== Limit set of a trajectory ==

This notion is used in dynamical systems, to study limits of trajectories. Defining a trajectory to be a function \gamma: \mathbb{R} \rightarrow X, the point \gamma(t) is thought of as the "position" of the trajectory at "time" t. The limit set of a trajectory is defined as follows. To any increasing sequence of times \{t_n\} with t_n \rightarrow \infty, there is an associated sequence of positions \{x_n\} = \{\gamma(t_n)\}. If x is the limit of the sequence \{x_n\} for some such sequence of times, then x is an element of the limit set of the trajectory.

Technically, this is the \omega-limit set. The corresponding limit set for sequences of decreasing time is called the \alpha-limit set.

An illustrative example is the circle trajectory: \gamma(t) = (\cos(t), \sin(t)). This has no unique limit, but for each \theta \in \mathbb{R}, the point (\cos(\theta), \sin(\theta)) is a limit point, given by the sequence of times t_n = \theta + 2\pi n. But the limit points need not be attained on the trajectory. The trajectory \gamma(t) = t/(1 + t)(\cos(t), \sin(t)) also has the unit circle as its limit set.

Uses

Limits are used to define a number of important concepts in analysis.

= Series =

{{Main article|series (mathematics)}}

A particular expression of interest that is formalized as the limit of a sequence is the sum of an infinite series. These are "infinite sums" of real numbers, generally written as

\sum_{n = 1}^\infty a_n.

This is defined through limits as follows: given a sequence of real numbers \{a_n\}, the sequence of partial sums is defined by

s_n = \sum_{i = 1}^n a_i.

If the limit of the sequence \{s_n\} exists, the value of the expression \sum_{n = 1}^\infty a_n is defined to be the limit. Otherwise, the series is said to be divergent.

A classic example is the Basel problem, where a_n = 1/n^2. Then

\sum_{n = 1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}.
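This convergence can be checked numerically. A minimal Python sketch of the partial sums (the helper name `partial_sum` is illustrative); the tail of the series is bounded by 1/N, so convergence is slow:

```python
import math

# Partial sums of the Basel series sum 1/n**2 approach pi**2 / 6.

def partial_sum(N):
    return sum(1 / n ** 2 for n in range(1, N + 1))

target = math.pi ** 2 / 6

# The tail sum_{n > N} 1/n**2 is less than the integral bound 1/N.
for N in (10, 1000, 100000):
    assert abs(partial_sum(N) - target) < 1.0 / N
```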

However, while for sequences there is essentially a unique notion of convergence, for series there are different notions of convergence. This is due to the fact that the expression \sum_{n = 1}^\infty a_n does not discriminate between different orderings of the sequence \{a_n\}, while the convergence properties of the sequence of partial sums can depend on the ordering of the sequence.

A series which converges for all orderings is called unconditionally convergent. Unconditional convergence can be proven to be equivalent to absolute convergence, defined as follows: a series is absolutely convergent if \sum_{n = 1}^\infty |a_n| is well defined. Furthermore, for an absolutely convergent series, all possible orderings give the same value.

Otherwise, the series is conditionally convergent. A surprising result for conditionally convergent series is the Riemann series theorem: depending on the ordering, the partial sums can be made to converge to any real number, as well as \pm \infty.
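The rearrangement phenomenon can be sketched for the conditionally convergent alternating harmonic series 1 - 1/2 + 1/3 - \cdots. The greedy strategy below (an illustrative implementation, not the theorem's proof) adds positive terms while the running sum is at or below the target and negative terms otherwise; since the unused terms shrink to 0, the partial sums are steered toward any chosen target.

```python
# Sketch of the Riemann series theorem: greedily reordering the terms
# of the alternating harmonic series steers its partial sums toward an
# arbitrary target value (here 0.25).

def rearranged_partial_sums(target, steps):
    pos = iter(1 / n for n in range(1, 10 ** 7, 2))    # +1, +1/3, +1/5, ...
    neg = iter(-1 / n for n in range(2, 10 ** 7, 2))   # -1/2, -1/4, ...
    s, sums = 0.0, []
    for _ in range(steps):
        # overshoot is bounded by the size of the term just added,
        # which tends to 0, so the sums converge to the target
        s += next(pos) if s <= target else next(neg)
        sums.append(s)
    return sums

sums = rearranged_partial_sums(0.25, 100000)
assert abs(sums[-1] - 0.25) < 0.01
```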

== Power series ==

{{Main article|Power series}}

A useful application of the theory of sums of series is for power series. These are sums of series of the form

f(z) = \sum_{n = 0}^\infty c_n z^n.

Often z is thought of as a complex number, and a suitable notion of convergence of complex sequences is needed. The set of values of z\in \mathbb{C} for which the series sum converges is a disk, with its radius known as the radius of convergence.

= Continuity of a function at a point =

The definition of continuity at a point is given through limits.

The above definition of a limit holds even if f(c) \neq L. Indeed, the function {{math|f}} need not even be defined at {{mvar|c}}. However, if f(c) is defined and is equal to L, then the function is said to be continuous at the point c.

Equivalently, the function is continuous at c if f(x) \rightarrow f(c) as x \rightarrow c, or in terms of sequences, whenever x_n \rightarrow c, then f(x_n) \rightarrow f(c).

An example of a limit where f is not defined at c is given below.

Consider the function

f(x) = \frac{x^2 - 1}{x - 1}.

then {{math|f(1)}} is not defined (see Indeterminate form), yet as {{mvar|x}} moves arbitrarily close to 1, {{math|f(x)}} correspondingly approaches 2:{{Cite web |title=limit {{!}} Definition, Example, & Facts |url=https://www.britannica.com/science/limit-mathematics |access-date=2020-08-18 |website=Encyclopedia Britannica |language=en |archive-date=2021-05-09 |archive-url=https://web.archive.org/web/20210509211046/https://www.britannica.com/science/limit-mathematics |url-status=live }}

{| class="wikitable"
|-
! {{math|f(0.9)}} !! {{math|f(0.99)}} !! {{math|f(0.999)}} !! {{math|f(1.0)}} !! {{math|f(1.001)}} !! {{math|f(1.01)}} !! {{math|f(1.1)}}
|-
| {{math|1.900}} || {{math|1.990}} || {{math|1.999}} || {{math|undefined}} || {{math|2.001}} || {{math|2.010}} || {{math|2.100}}
|}

Thus, {{math|f(x)}} can be made arbitrarily close to the limit of 2—just by making {{mvar|x}} sufficiently close to {{math|1}}.

In other words,

\lim_{x \to 1} \frac{x^2-1}{x-1} = 2.

This can also be calculated algebraically, as \frac{x^2-1}{x-1} = \frac{(x+1)(x-1)}{x-1} = x+1 for all real numbers {{math|x ≠ 1}}.

Now, since {{math|x + 1}} is continuous in {{mvar|x}} at 1, we can now plug in 1 for {{mvar|x}}, leading to the equation

\lim_{x \to 1} \frac{x^2-1}{x-1} = 1+1 = 2.

In addition to limits at finite values, functions can also have limits at infinity. For example, consider the function

f(x) = \frac{2x-1}{x}

where:

  • {{math|1=f(100) = 1.9900}}
  • {{math|1=f(1000) = 1.9990}}
  • {{math|1=f(10000) = 1.9999}}

As {{mvar|x}} becomes extremely large, the value of {{math|f(x)}} approaches {{math|2}}, and the value of {{math|f(x)}} can be made as close to {{math|2}} as one could wish, by making {{mvar|x}} sufficiently large. So in this case, the limit of {{math|f(x)}} as {{mvar|x}} approaches infinity is {{math|2}}, or in mathematical notation,

\lim_{x\to\infty}\frac{2x-1}{x} = 2.

= Continuous functions =

An important class of functions when considering limits are continuous functions. These are precisely those functions which preserve limits, in the sense that if f is a continuous function, then whenever a_n \rightarrow a in the domain of f, the limit f(a_n) exists and furthermore equals f(a).

In the most general setting of topological spaces, a short proof is given below:

Let f: X\rightarrow Y be a continuous function between topological spaces X and Y. By definition, for each open set V in Y, the preimage f^{-1}(V) is open in X.

Now suppose a_n \rightarrow a is a sequence with limit a in X. Then f(a_n) is a sequence in Y, and f(a) is some point.

Choose a neighborhood V of f(a). Then f^{-1}(V) is an open set (by continuity of f) which in particular contains a, and therefore f^{-1}(V) is a neighborhood of a. By the convergence of a_n to a, there exists an N such that for n > N, we have a_n \in f^{-1}(V).

Then applying f to both sides gives that, for the same N, for each n > N we have f(a_n) \in V. Originally V was an arbitrary neighborhood of f(a), so f(a_n) \rightarrow f(a). This concludes the proof.

In real analysis, for the more concrete case of real-valued functions defined on a subset E \subset \mathbb{R}, that is, f: E \rightarrow \mathbb{R}, a continuous function may also be defined as a function which is continuous at every point of its domain.

= Limit points =

In topology, limits are used to define limit points of a subset of a topological space, which in turn give a useful characterization of closed sets.

In a topological space X, consider a subset S. A point a is called a limit point if there is a sequence \{a_n\} in S\backslash\{a\} such that a_n \rightarrow a.

The reason why \{a_n\} is defined to be in S\backslash\{a\} rather than just S is illustrated by the following example. Take X = \mathbb{R} and S = [0,1] \cup \{2\}. Then 2 \in S, and therefore is the limit of the constant sequence 2, 2, \cdots. But 2 is not a limit point of S.

A closed set, which is defined to be the complement of an open set, is equivalently any set C which contains all its limit points.

= Derivative =

{{Main article|derivative}}

The derivative is defined formally as a limit. In the scope of real analysis, the derivative is first defined for real functions f defined on a subset E \subset \mathbb{R}. The derivative at x \in E is defined as follows. If the limit of

\frac{f(x+h) - f(x)}{h}

as h \rightarrow 0 exists, then the derivative at x is this limit.

Equivalently, it is the limit as y \rightarrow x of

\frac{f(y) - f(x)}{y-x}.

If the derivative exists, it is commonly denoted by f'(x).
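The limit of difference quotients can be watched numerically. In this Python sketch (an illustrative example with f(x) = x^2), the quotient works out algebraically to 2x + h, so it approaches the derivative 2x as h shrinks:

```python
# The derivative as a limit of difference quotients: for f(x) = x**2,
# (f(x + h) - f(x)) / h = 2*x + h, which tends to f'(x) = 2*x as h -> 0.

def f(x):
    return x ** 2

def diff_quotient(x, h):
    return (f(x + h) - f(x)) / h

# At x = 3 the quotients approach f'(3) = 6 as h shrinks.
hs = [10.0 ** -k for k in range(1, 7)]
quotients = [diff_quotient(3.0, h) for h in hs]
# each quotient equals 6 + h up to floating-point rounding
assert all(abs(q - 6.0) <= h + 1e-9 for q, h in zip(quotients, hs))
```

Note that in floating-point arithmetic, taking h too small eventually degrades the quotient through cancellation, which is why the sketch stops at h = 10^{-6}.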

Properties

= Sequences of real numbers =

For sequences of real numbers, a number of properties can be proven. Suppose \{a_n\} and \{b_n\} are two sequences converging to a and b respectively.

  • Sum of limits is equal to limit of sum

a_n + b_n \rightarrow a + b.

  • Product of limits is equal to limit of product

a_n \cdot b_n \rightarrow a \cdot b.

  • Reciprocal of limit is equal to limit of reciprocal (as long as a \neq 0)

\frac{1}{a_n} \rightarrow \frac{1}{a}.

Equivalently, the function f(x) = 1/x is continuous at nonzero x.

== Cauchy sequences ==

{{See also | Cauchy sequence}}

A property of convergent sequences of real numbers is that they are Cauchy sequences. The definition of a Cauchy sequence \{a_n\} is that for every real number \varepsilon > 0, there is an N such that whenever m, n > N,

|a_m - a_n| < \varepsilon.

Informally, for any arbitrarily small error \varepsilon, it is possible to find an interval of diameter \varepsilon such that eventually the sequence is contained within the interval.

Cauchy sequences are closely related to convergent sequences. In fact, for sequences of real numbers they are equivalent: any Cauchy sequence is convergent.

In general metric spaces, it continues to hold that convergent sequences are also Cauchy. But the converse is not true: not every Cauchy sequence is convergent in a general metric space. A classic counterexample is the rational numbers, \mathbb{Q}, with the usual distance. The sequence of decimal approximations to \sqrt{2}, truncated at the nth decimal place is a Cauchy sequence, but does not converge in \mathbb{Q}.
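The decimal-truncation example can be made exact in Python with rational arithmetic (a sketch; the helper name `truncation` is hypothetical): consecutive truncations agree to more and more decimal places, so the sequence is Cauchy in \mathbb{Q}, yet no term squares to 2.

```python
from fractions import Fraction
from math import isqrt

# Decimal truncations of sqrt(2) as exact rationals: a Cauchy sequence
# in Q whose squares approach 2, but which has no rational limit.

def truncation(n):
    """sqrt(2) truncated to n decimal places, as an exact fraction."""
    return Fraction(isqrt(2 * 10 ** (2 * n)), 10 ** n)

terms = [truncation(n) for n in range(1, 20)]

# Cauchy: every later term is within 10**(1 - n) of truncation(n).
for n in range(1, 19):
    assert abs(terms[-1] - terms[n - 1]) < Fraction(1, 10 ** (n - 1))

# The squares approach 2, but no term is an exact square root of 2.
assert abs(terms[-1] ** 2 - 2) < Fraction(1, 10 ** 18)
assert all(t ** 2 != 2 for t in terms)
```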

A metric space in which every Cauchy sequence is also convergent, that is, Cauchy sequences are equivalent to convergent sequences, is known as a complete metric space.

One reason Cauchy sequences can be "easier to work with" than convergent sequences is that they are a property of the sequence \{a_n\} alone, while convergent sequences require not just the sequence \{a_n\} but also the limit of the sequence a.

= Order of convergence =

Beyond whether or not a sequence \{a_n\} converges to a limit a, it is possible to describe how fast a sequence converges to a limit. One way to quantify this is using the order of convergence of a sequence.

A formal definition of order of convergence can be stated as follows. Suppose \{a_n\}_{n > 0} is a sequence of real numbers which is convergent with limit a. Furthermore, a_n \neq a for all n. If positive constants \lambda and \alpha exist such that

\lim_{n \to \infty } \frac{ \left| a_{n+1} - a \right| }{ \left| a_n - a \right| ^\alpha } = \lambda

then a_n is said to converge to a with order of convergence \alpha . The constant \lambda is known as the asymptotic error constant.
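A standard illustration (not taken from the text above) is Newton's method for x^2 = 2, which converges to \sqrt{2} with order \alpha = 2 and asymptotic error constant \lambda = 1/(2\sqrt{2}) \approx 0.354. The Python sketch below estimates the ratio |a_{n+1} - a| / |a_n - a|^2 from successive errors:

```python
import math

# Order of convergence of Newton's method on x**2 = 2: the iteration
# x -> (x + 2/x)/2 converges to sqrt(2) with order alpha = 2 and
# asymptotic error constant lambda = 1 / (2 * sqrt(2)).

root = math.sqrt(2)
x = 1.0
errors = []
for _ in range(5):
    x = (x + 2 / x) / 2
    errors.append(abs(x - root))

# |e_{n+1}| / |e_n|**2 should approach 1/(2*sqrt(2)) ~ 0.3536.
ratios = [errors[i + 1] / errors[i] ** 2 for i in range(3)]
assert all(abs(r - 1 / (2 * root)) < 0.05 for r in ratios)
```

Only the first few iterates are usable for this estimate: after about five steps, the error falls below double-precision resolution.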

Order of convergence is used, for example, in the field of numerical analysis, in error analysis.

= Computability =

Limits can be difficult to compute. There exist limit expressions whose modulus of convergence is undecidable. In recursion theory, the limit lemma proves that it is possible to encode undecidable problems using limits.{{Cite book |last=Soare |first=Robert I. |url=https://www.worldcat.org/oclc/1154894968 |title=Recursively enumerable sets and degrees : a study of computable functions and computably generated sets |date=2014 |publisher=Springer-Verlag |isbn=978-3-540-66681-3 |location=Berlin |oclc=1154894968}}

There are several theorems or tests that indicate whether the limit exists. These are known as convergence tests. Examples include the ratio test and the squeeze theorem. However, they may not tell how to compute the limit.

See also

Notes

{{reflist}}

References

  • {{ citation | last1 = Apostol | first1 = Tom M. | year = 1974 | lccn = 72011473 | title = Mathematical Analysis | edition = 2nd | publisher = Addison-Wesley | location = Menlo Park }}