Alternating series
{{Short description|Infinite series whose terms alternate in sign}}
{{More citations needed|date=January 2010}}
{{Calculus |Series}}
In mathematics, an alternating series is an infinite series of terms that alternate between positive and negative signs. In capital-sigma notation this is expressed
<math display=block>\sum_{n=0}^\infty (-1)^{n+1} a_n</math>
or
<math display=block>\sum_{n=0}^\infty (-1)^n a_n</math>
with {{math|''a''<sub>''n''</sub> > 0}} for all {{mvar|n}}.
Like any series, an alternating series is a convergent series if and only if the sequence of partial sums of the series converges to a limit. The alternating series test guarantees that an alternating series is convergent if the terms {{math|''a''<sub>''n''</sub>}} converge to 0 monotonically, but this condition is not necessary for convergence.
Examples
The geometric series 1/2 − 1/4 + 1/8 − 1/16 + ⋯ sums to {{sfrac|1|3}}.
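This value follows from the formula for the sum of a geometric series with first term {{math|1/2}} and common ratio {{math|−1/2}}:
<math display=block>\sum_{n=1}^\infty \frac{(-1)^{n+1}}{2^n} = \frac{1/2}{1 - (-1/2)} = \frac{1}{3}.</math>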
The alternating harmonic series has a finite sum but the harmonic series does not. The series
<math display=block>1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}</math>
converges to the natural logarithm of 2, but is not absolutely convergent.
The Mercator series provides an analytic power series expression of the natural logarithm, given by
<math display=block>\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} x^n = \ln(1+x), \quad |x| \le 1,\ x \ne -1.</math>
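At the endpoint <math>x = 1</math> the Mercator series reduces to the alternating harmonic series mentioned above:
<math display=block>\ln 2 = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots.</math>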
The functions sine and cosine used in trigonometry and introduced in elementary algebra as the ratio of sides of a right triangle can also be defined as alternating series in calculus:
<math display=block>\sin x = \sum_{n=0}^\infty (-1)^n \frac{x^{2n+1}}{(2n+1)!}</math>
and
<math display=block>\cos x = \sum_{n=0}^\infty (-1)^n \frac{x^{2n}}{(2n)!}.</math>
When the alternating factor {{math|(−1)<sup>''n''</sup>}} is removed from these series one obtains the hyperbolic functions sinh and cosh used in calculus and statistics.
For integer or positive index {{mvar|α}} the Bessel function of the first kind may be defined with the alternating series
<math display=block>J_\alpha(x) = \sum_{m=0}^\infty \frac{(-1)^m}{m!\,\Gamma(m+\alpha+1)} \left(\frac{x}{2}\right)^{2m+\alpha}</math>
where {{math|Γ(''z'')}} is the gamma function.
If {{mvar|s}} is a complex number, the Dirichlet eta function is formed as an alternating series
<math display=block>\eta(s) = \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n^s} = \frac{1}{1^s} - \frac{1}{2^s} + \frac{1}{3^s} - \frac{1}{4^s} + \cdots</math>
that is used in analytic number theory.
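For {{math|Re(''s'') > 1}} the eta function is related to the Riemann zeta function by
<math display=block>\eta(s) = \left(1 - 2^{1-s}\right)\zeta(s),</math>
which is one way such alternating series enter analytic number theory.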
Alternating series test
{{main|Alternating series test}}
The theorem known as the "Leibniz Test" or the alternating series test states that an alternating series will converge if the terms {{math|''a''<sub>''n''</sub>}} converge to 0 monotonically.
Proof: Suppose the sequence <math>a_n</math> converges to zero and is monotone decreasing. If <math>m</math> is odd and <math>m < n</math>, we obtain the estimate <math>S_n - S_m \le a_m</math> via the following calculation:
<math display=block>\begin{align}
S_n - S_m & = \sum_{k=0}^n (-1)^k\,a_k - \sum_{k=0}^m (-1)^k\,a_k = \sum_{k=m+1}^n (-1)^k\,a_k \\
& = a_{m+1} - a_{m+2} + a_{m+3} - a_{m+4} + \cdots + a_n \\
& = a_{m+1} - (a_{m+2} - a_{m+3}) - (a_{m+4} - a_{m+5}) - \cdots - a_n \le a_{m+1} \le a_m.
\end{align}</math>
Since <math>a_n</math> is monotonically decreasing, each parenthesized term is non-negative, which gives the final inequality. A similar argument shows that <math>-a_m \le S_n - S_m</math>, so <math>|S_n - S_m| \le a_m</math>. Since <math>a_m</math> converges to 0, the partial sums <math>S_m</math> form a Cauchy sequence and therefore converge. The argument for <math>m</math> even is similar.
Approximating sums
The estimate above does not depend on {{mvar|n}}. So, if the terms <math>a_n</math> approach 0 monotonically, the estimate provides an error bound for approximating the infinite sum by a partial sum:
<math display=block>\left|\sum_{k=0}^\infty (-1)^k a_k - \sum_{k=0}^m (-1)^k a_k\right| \le |a_{m+1}|.</math>
In other words, the error made by truncating the series is at most the size of the first omitted term.
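For example, truncating the alternating harmonic series after four terms gives
<math display=block>1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} = \frac{7}{12} \approx 0.583,</math>
and the bound guarantees that the error is at most the first omitted term <math>1/5 = 0.2</math>; the actual error is <math>|\ln 2 - 7/12| \approx 0.11</math>.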
Absolute convergence
A series <math>\textstyle\sum a_n</math> converges absolutely if the series <math>\textstyle\sum |a_n|</math> converges.
Theorem: Absolutely convergent series are convergent.
Proof: Suppose <math>\textstyle\sum a_n</math> is absolutely convergent. Then <math>\textstyle\sum |a_n|</math> is convergent, and it follows that <math>\textstyle\sum (a_n + |a_n|)</math> converges as well, since <math>0 \le a_n + |a_n| \le 2|a_n|</math> and the comparison test applies. Therefore, <math>\textstyle\sum a_n</math> converges, being the difference of two convergent series: <math>\textstyle\sum a_n = \sum (a_n + |a_n|) - \sum |a_n|</math>.
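For example, the series
<math display=block>\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n^2} = 1 - \frac{1}{4} + \frac{1}{9} - \frac{1}{16} + \cdots</math>
is an absolutely convergent alternating series, because the series of absolute values <math>\textstyle\sum 1/n^2</math> converges.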
Conditional convergence
A series is conditionally convergent if it converges but does not converge absolutely.
For example, the harmonic series
<math display=block>\sum_{n=1}^\infty \frac{1}{n}</math>
diverges, while the alternating version
<math display=block>\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}</math>
converges by the alternating series test.
Rearrangements
For any series, we can create a new series by rearranging the order of summation. A series is unconditionally convergent if any rearrangement creates a series with the same convergence as the original series. Absolutely convergent series are unconditionally convergent. But the Riemann series theorem states that a conditionally convergent series can be rearranged to converge to any value at all, or even to diverge.{{cite journal |last1=Mallik |first1=AK |year=2007 |title=Curious Consequences of Simple Sequences |journal=Resonance |volume=12 |issue=1 |pages=23–37 |doi=10.1007/s12045-007-0004-7|s2cid=122327461 }} Agnew's theorem describes rearrangements that preserve convergence for all convergent series. The general principle is that addition of infinite sums is only commutative for absolutely convergent series.
For example, one false proof that 1=0 exploits the failure of associativity for infinite sums.
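One version of such an argument regroups Grandi's series, treating grouping (associativity) as if it were valid for a divergent series:
<math display=block>0 = (1-1) + (1-1) + \cdots = 1 + (-1+1) + (-1+1) + \cdots = 1.</math>
The manipulation is invalid because the underlying series <math>1 - 1 + 1 - 1 + \cdots</math> does not converge.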
As another example, by the Mercator series
<math display=block>\ln 2 = \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots.</math>
But, since the series does not converge absolutely, we can rearrange the terms to obtain a series for <math>\tfrac{1}{2}\ln 2</math>:
<math display=block>\begin{align}
& {} \quad \left(1-\frac{1}{2}\right)-\frac{1}{4} +\left(\frac{1}{3}-\frac{1}{6}\right) -\frac{1}{8}+\left(\frac{1}{5} -\frac{1}{10}\right)-\frac{1}{12}+\cdots \\[8pt]
& = \frac{1}{2}-\frac{1}{4}+\frac{1}{6} -\frac{1}{8}+\frac{1}{10}-\frac{1}{12} +\cdots \\[8pt]
& = \frac{1}{2}\left(1-\frac{1}{2} + \frac{1}{3} -\frac{1}{4}+\frac{1}{5}- \frac{1}{6}+ \cdots\right)= \frac{1}{2} \ln(2).
\end{align}</math>
Series acceleration
In practice, the numerical summation of an alternating series may be sped up using any one of a variety of series acceleration techniques. One of the oldest techniques is that of Euler summation, and there are many modern techniques that can offer even more rapid convergence.
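For instance, one classical form of the Euler transformation replaces an alternating series <math>\textstyle\sum_{n \ge 0} (-1)^n a_n</math> by the series of scaled forward differences <math>\textstyle\sum_{k \ge 0} (-1)^k \Delta^k a_0 / 2^{k+1}</math>, which typically converges much faster. The following sketch (the function name and structure are illustrative, not taken from any particular library) applies this transformation to the alternating harmonic series:
<syntaxhighlight lang="python">
import math

def euler_transform_sum(a, terms):
    """Approximate sum_{n>=0} (-1)^n a(n) via the Euler transformation
    sum_{k>=0} (-1)^k (Delta^k a)(0) / 2^(k+1), where Delta is the
    forward-difference operator (Delta a)(n) = a(n+1) - a(n)."""
    values = [a(n) for n in range(terms + 1)]  # a(0), ..., a(terms)
    total = 0.0
    for k in range(terms):
        total += (-1) ** k * values[0] / 2 ** (k + 1)  # (-1)^k Delta^k a(0) / 2^(k+1)
        values = [values[i + 1] - values[i] for i in range(len(values) - 1)]
    return total

# Alternating harmonic series: 1 - 1/2 + 1/3 - ... = ln 2, with a(n) = 1/(n+1).
a = lambda n: 1.0 / (n + 1)

direct = sum((-1) ** n * a(n) for n in range(10))  # ten direct terms
accelerated = euler_transform_sum(a, 10)           # ten transformed terms

print(abs(direct - math.log(2)))       # error of order 10**-2
print(abs(accelerated - math.log(2)))  # error of order 10**-4
</syntaxhighlight>
Here the transformed terms decay roughly geometrically, while the terms of the original series decay only like <math>1/n</math>, which is why the transformed partial sums approach <math>\ln 2</math> much more quickly.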
See also
Notes
{{reflist}}
References
- Earl D. Rainville (1967) ''Infinite Series'', pp. 73–76, Macmillan Publishers.
- {{MathWorld|title=Alternating Series|urlname=AlternatingSeries}}
{{series (mathematics)}}
{{DEFAULTSORT:Alternating Series}}