Bisection method

{{short description|Algorithm for finding a zero of a function}}

{{about|searching zeros of continuous functions|searching a finite sorted array|binary search algorithm|the method of determining what software change caused a change in behavior|Bisection (software engineering)}}

{{CS1 config|mode=cs1}}

[[File:Bisection method.svg|thumb]]

In mathematics, the bisection method is a root-finding method that applies to any continuous function for which one knows two values with opposite signs. The method consists of repeatedly bisecting the interval defined by these values and then selecting the subinterval in which the function changes sign, and therefore must contain a root. It is a very simple and robust method, but it is also relatively slow. Because of this, it is often used to obtain a rough approximation to a solution which is then used as a starting point for more rapidly converging methods.{{Harvnb|Burden|Faires|2014|p=51}} The method is also called the interval halving method,{{cite web |url=http://siber.cankaya.edu.tr/NumericalComputations/ceng375/node32.html |title=Interval Halving (Bisection) |access-date=2013-11-07 |url-status=dead |archive-url=https://web.archive.org/web/20130519092250/http://siber.cankaya.edu.tr/NumericalComputations/ceng375/node32.html |archive-date=2013-05-19 }} the binary search method,{{Harvnb|Burden|Faires|2014|p=28}} or the dichotomy method.{{Cite web|title = Dichotomy method - Encyclopedia of Mathematics|url = https://www.encyclopediaofmath.org/index.php/Dichotomy_method|website = www.encyclopediaofmath.org|access-date = 2015-12-21}}

For polynomials, more elaborate methods exist for testing the existence of a root in an interval (Descartes' rule of signs, Sturm's theorem, Budan's theorem). They allow extending the bisection method into efficient algorithms for finding all real roots of a polynomial; see Real-root isolation.

The method

The method is applicable for numerically solving the equation f(x)=0 for the real variable x, where f is a continuous function defined on an interval [a,b] and where f(a) and f(b) have opposite signs. In this case a and b are said to bracket a root since, by the intermediate value theorem, the continuous function f must have at least one root in the interval (a,b).

At each step the method divides the interval into two halves by computing the midpoint c = (a+b)/2 of the interval and the value of the function f(c) at that point. If c itself is a root then the process has succeeded and stops. Otherwise, there are now only two possibilities: either f(a) and f(c) have opposite signs and bracket a root, or f(c) and f(b) have opposite signs and bracket a root. (If the function has the same sign at the endpoints of an interval, the endpoints may or may not bracket roots of the function.) The method selects the subinterval that is guaranteed to be a bracket as the new interval to be used in the next step. In this way an interval that contains a zero of f is reduced in width by 50% at each step. The process is continued until the interval is sufficiently small.

Explicitly, if f(c) = 0 then c may be taken as the solution and the process stops. Otherwise, if f(a) and f(c) have opposite signs, then the method sets c as the new value for b, and if f(b) and f(c) have opposite signs then the method sets c as the new a. In both cases, the new f(a) and f(b) have opposite signs, so the method is applicable to this smaller interval.{{Harvnb|Burden|Faires|2014|p=28}}
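In code, this procedure can be sketched as follows (a minimal Python illustration only; the fuller listing discussed later in the article adds tolerance handling and floating-point considerations):

<syntaxhighlight lang="python">
def bisect_sketch(f, a, b, tol=1e-12):
    """Minimal bisection sketch: f continuous on [a, b], f(a) and f(b) of opposite signs."""
    fa = f(a)
    while b - a > tol:
        c = (a + b) / 2       # midpoint of the current bracket
        fc = f(c)
        if fc == 0:
            return c          # landed exactly on a root
        if (fa < 0) == (fc < 0):
            a, fa = c, fc     # root lies in [c, b]
        else:
            b = c             # root lies in [a, c]
    return (a + b) / 2
</syntaxhighlight>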

= Stopping condition =

The input for the method is a continuous function f, an interval [a,b], and the function values f(a) and f(b). The function values are of opposite sign (there is at least one zero crossing within the interval). Each iteration performs these steps:

  1. Calculate c, the midpoint of the interval:
:\qquad c =
\begin{cases}
\tfrac{a+b}{2}, & \text{if } a \times b \leq 0 \\
a+\tfrac{b-a}{2}, & \text{if } a \times b > 0
\end{cases}
  2. Calculate the function value at the midpoint, f(c).
  3. If convergence is satisfactory (see below), return c and stop iterating.
  4. Examine the sign of f(c) and replace either (a, f(a)) or (b, f(b)) with (c, f(c)) so that there is a zero crossing within the new interval.

In order to determine when the iteration should stop, it is necessary to consider what is meant by the concept of 'tolerance' (\epsilon).

Burden & Faires{{Harvnb|Burden|Faires|2014|p=50}} state:

"we can select a tolerance \epsilon > 0 and generate c1, ..., cN until one of the following conditions is met:

scope=col style="width: 250px;" |

! scope=col style="width: 100px;" |

! scope=col style="width: 125px;" |

style="text-align: right;"

||c_N-c_{N-1}|<\epsilon,

(2.1)
style="text-align: right;"

| \left|\frac{c_N-c_{N-1}}{c_N}\right|<\epsilon,

c_N\ne 0, or(2.2)
style="text-align: right;"

| |f(c_N)|<\epsilon.

(2.3)

Unfortunately, difficulties can arise using any of these stopping criteria ... Without additional knowledge about f or c, inequality (2.2) is the best stopping criterion to apply because it comes closest to testing relative error."

(Note: c has been used here as it is more common than Burden and Faires's 'p'.)

The objective is to find an approximation, within the tolerance, to the root.

It can be seen that (2.3) |f(c_N)|<\epsilon does not give such an approximation unless the slope of the function at c_N is in the neighborhood of \pm 1.

Suppose, for the purpose of illustration, the tolerance \epsilon= 5\times10^{-7}.

Then, for a function such as f(x)=10^{-m}(x - 1), criterion (2.3) becomes

:|f(x)| = 10^{-m}|x - 1| < 5\times10^{-7},

so

:|x - 1|<5\times10^{m-7}.

This means that any number {{mvar|x}} in

:[1-5\times10^{m-7},\ 1+ 5\times 10^{m-7}]

would be accepted as a 'good' approximation to the root.

If m = 10, the approximation to the root 1 would be anywhere in

:[1-5000,\ 1+ 5000] = [-4999,\ 5001],

a very poor result.
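This can be checked numerically (a small illustration using the m = 10 case above):

<syntaxhighlight lang="python">
# Criterion (2.3) with epsilon = 5e-7 accepts points far from the root x = 1
# when the function is f(x) = 1e-10 * (x - 1), i.e. the m = 10 case.
f = lambda x: 1e-10 * (x - 1)
print(abs(f(4000)) < 5e-7)   # True, although 4000 is nowhere near the root
</syntaxhighlight>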

As (2.3) does not appear to give acceptable results, (2.1) and (2.2) need to be evaluated.

The following Python script compares the behavior for those two stopping conditions.

<syntaxhighlight lang="python">
import numpy as np

def bisect(f, a, b, tolerance):
    fa = f(a)
    fb = f(b)
    i = 0
    stop_a = []   # [c, i] when the absolute criterion (2.1) is first met
    stop_r = []   # [c, i] when the relative criterion (2.2) is first met
    while True:
        i += 1
        c = a + (b - a) / 2
        fc = f(c)
        if c < 10:   # small-root example
            if not stop_a:
                print('{:3d} {:18.16f} {:18.16f} {:18.16e} | {:5.2e} {:5.2e}'
                      .format(i, a, b, c, b - a, (b - a) / c))
            else:
                # absolute criterion already met: print dashes followed by b - a
                print('{:3d} {:18.16f} {:18.16f} {:18.16e} | ----- {:5.2e}'
                      .format(i, a, b, c, b - a))
        else:        # large-root example
            if not stop_r:
                print('{:3d} {:18.7f} {:18.7f} {:18.7e} | {:5.2e} {:5.2e}'
                      .format(i, a, b, c, b - a, (b - a) / c))
            else:
                # relative criterion already met: print b - a followed by dashes
                print('{:3d} {:18.7f} {:18.7f} {:18.7e} | {:5.2e} ----- '
                      .format(i, a, b, c, b - a))
        if fc == 0:
            return [c, i]
        if b - a <= abs(c) * tolerance and not stop_r:
            stop_r = [c, i]
        if b - a <= tolerance and not stop_a:
            stop_a = [c, i]
        if np.sign(fa) == np.sign(fc):
            a = c
            fa = fc
        else:
            b = c
            fb = fc
        if stop_r and stop_a:
            return [stop_a, stop_r]
</syntaxhighlight>

The first function to be tested is one with a small root i.e. f(x) = x - 0.00000000123456789

<syntaxhighlight lang="python">
print('  i          a                   b                  c               b - a    (b - a)/c')
f = lambda x: x - 0.00000000123456789
res = bisect(f, 0, 1, 5e-7)
print('In {:2d} steps the absolute error case gives {:20.18F}'.format(res[0][1], res[0][0]))
print('In {:2d} steps the relative error case gives {:20.18F}'.format(res[1][1], res[1][0]))
print('                as the approximation to      0.00000000123456789')
</syntaxhighlight>

 i          a                   b                  c               b - a    (b - a)/c

1 0.0000000000000000 1.0000000000000000 5.0000000000000000e-01 | 1.00e+00 2.00e+00

2 0.0000000000000000 0.5000000000000000 2.5000000000000000e-01 | 5.00e-01 2.00e+00

3 0.0000000000000000 0.2500000000000000 1.2500000000000000e-01 | 2.50e-01 2.00e+00

4 0.0000000000000000 0.1250000000000000 6.2500000000000000e-02 | 1.25e-01 2.00e+00

5 0.0000000000000000 0.0625000000000000 3.1250000000000000e-02 | 6.25e-02 2.00e+00

6 0.0000000000000000 0.0312500000000000 1.5625000000000000e-02 | 3.12e-02 2.00e+00

7 0.0000000000000000 0.0156250000000000 7.8125000000000000e-03 | 1.56e-02 2.00e+00

8 0.0000000000000000 0.0078125000000000 3.9062500000000000e-03 | 7.81e-03 2.00e+00

9 0.0000000000000000 0.0039062500000000 1.9531250000000000e-03 | 3.91e-03 2.00e+00

10 0.0000000000000000 0.0019531250000000 9.7656250000000000e-04 | 1.95e-03 2.00e+00

11 0.0000000000000000 0.0009765625000000 4.8828125000000000e-04 | 9.77e-04 2.00e+00

12 0.0000000000000000 0.0004882812500000 2.4414062500000000e-04 | 4.88e-04 2.00e+00

13 0.0000000000000000 0.0002441406250000 1.2207031250000000e-04 | 2.44e-04 2.00e+00

14 0.0000000000000000 0.0001220703125000 6.1035156250000000e-05 | 1.22e-04 2.00e+00

15 0.0000000000000000 0.0000610351562500 3.0517578125000000e-05 | 6.10e-05 2.00e+00

16 0.0000000000000000 0.0000305175781250 1.5258789062500000e-05 | 3.05e-05 2.00e+00

17 0.0000000000000000 0.0000152587890625 7.6293945312500000e-06 | 1.53e-05 2.00e+00

18 0.0000000000000000 0.0000076293945312 3.8146972656250000e-06 | 7.63e-06 2.00e+00

19 0.0000000000000000 0.0000038146972656 1.9073486328125000e-06 | 3.81e-06 2.00e+00

20 0.0000000000000000 0.0000019073486328 9.5367431640625000e-07 | 1.91e-06 2.00e+00

21 0.0000000000000000 0.0000009536743164 4.7683715820312500e-07 | 9.54e-07 2.00e+00

22 0.0000000000000000 0.0000004768371582 2.3841857910156250e-07 | 4.77e-07 2.00e+00

23 0.0000000000000000 0.0000002384185791 1.1920928955078125e-07 | ----- 2.38e-07

24 0.0000000000000000 0.0000001192092896 5.9604644775390625e-08 | ----- 1.19e-07

25 0.0000000000000000 0.0000000596046448 2.9802322387695312e-08 | ----- 5.96e-08

26 0.0000000000000000 0.0000000298023224 1.4901161193847656e-08 | ----- 2.98e-08

27 0.0000000000000000 0.0000000149011612 7.4505805969238281e-09 | ----- 1.49e-08

28 0.0000000000000000 0.0000000074505806 3.7252902984619141e-09 | ----- 7.45e-09

29 0.0000000000000000 0.0000000037252903 1.8626451492309570e-09 | ----- 3.73e-09

30 0.0000000000000000 0.0000000018626451 9.3132257461547852e-10 | ----- 1.86e-09

31 0.0000000009313226 0.0000000018626451 1.3969838619232178e-09 | ----- 9.31e-10

32 0.0000000009313226 0.0000000013969839 1.1641532182693481e-09 | ----- 4.66e-10

33 0.0000000011641532 0.0000000013969839 1.2805685400962830e-09 | ----- 2.33e-10

34 0.0000000011641532 0.0000000012805685 1.2223608791828156e-09 | ----- 1.16e-10

35 0.0000000012223609 0.0000000012805685 1.2514647096395493e-09 | ----- 5.82e-11

36 0.0000000012223609 0.0000000012514647 1.2369127944111824e-09 | ----- 2.91e-11

37 0.0000000012223609 0.0000000012369128 1.2296368367969990e-09 | ----- 1.46e-11

38 0.0000000012296368 0.0000000012369128 1.2332748156040907e-09 | ----- 7.28e-12

39 0.0000000012332748 0.0000000012369128 1.2350938050076365e-09 | ----- 3.64e-12

40 0.0000000012332748 0.0000000012350938 1.2341843103058636e-09 | ----- 1.82e-12

41 0.0000000012341843 0.0000000012350938 1.2346390576567501e-09 | ----- 9.09e-13

42 0.0000000012341843 0.0000000012346391 1.2344116839813069e-09 | ----- 4.55e-13

43 0.0000000012344117 0.0000000012346391 1.2345253708190285e-09 | ----- 2.27e-13

44 0.0000000012345254 0.0000000012346391 1.2345822142378893e-09 | ----- 1.14e-13

45 0.0000000012345254 0.0000000012345822 1.2345537925284589e-09 | ----- 5.68e-14

46 0.0000000012345538 0.0000000012345822 1.2345680033831741e-09 | ----- 2.84e-14

47 0.0000000012345538 0.0000000012345680 1.2345608979558165e-09 | ----- 1.42e-14

48 0.0000000012345609 0.0000000012345680 1.2345644506694953e-09 | ----- 7.11e-15

49 0.0000000012345645 0.0000000012345680 1.2345662270263347e-09 | ----- 3.55e-15

50 0.0000000012345662 0.0000000012345680 1.2345671152047544e-09 | ----- 1.78e-15

51 0.0000000012345671 0.0000000012345680 1.2345675592939642e-09 | ----- 8.88e-16

52 0.0000000012345676 0.0000000012345680 1.2345677813385691e-09 | ----- 4.44e-16

In 22 steps the absolute error case gives 0.000000238418579102

In 52 steps the relative error case gives 0.000000001234567781

as the approximation to 0.00000000123456789

The reason that the absolute difference method gives such a poor result is that it measures decimal places of accuracy, but those decimal places may contain only zeros and so carry no useful information.

That means that the 6 zeros after the decimal point in 0.000000238418579102 match the first 6 in 0.00000000123456789 so the absolute difference is less than \epsilon= 5\times10^{-7}.

On the other hand, the relative difference method measures significant digits and represents a much better approximation to the position of the root.
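A quick check of the two error measures for the result above (the numbers are simply those printed by the script):

<syntaxhighlight lang="python">
root   = 0.00000000123456789
approx = 0.000000238418579102          # absolute-criterion result after 22 steps

abs_err = abs(approx - root)
rel_err = abs_err / abs(root)
print(abs_err)   # about 2.37e-07, below the tolerance 5e-07
print(rel_err)   # about 1.9e+02: the approximation shares no significant digits with the root
</syntaxhighlight>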

The next example is

<syntaxhighlight lang="python">
print('  i           a                  b                    c          b - a   (b - a)/c')
f = lambda x: x - 1234567.89012456789   # restores the function the original listing called 'fun'
res = bisect(f, 1234550, 1234581, 5e-7)
print('In %2d steps the absolute error case gives %20.18F' % (res[0][1], res[0][0]))
print('In %2d steps the relative error case gives %20.18F' % (res[1][1], res[1][0]))
print('              as the approximation to      1234567.89012456789')
</syntaxhighlight>

i a b c b - a (b - a)/c

1 1234550.0000000 1234581.0000000 1.2345655e+06 | 3.10e+01 2.51e-05

2 1234565.5000000 1234581.0000000 1.2345732e+06 | 1.55e+01 1.26e-05

3 1234565.5000000 1234573.2500000 1.2345694e+06 | 7.75e+00 6.28e-06

4 1234565.5000000 1234569.3750000 1.2345674e+06 | 3.88e+00 3.14e-06

5 1234567.4375000 1234569.3750000 1.2345684e+06 | 1.94e+00 1.57e-06

6 1234567.4375000 1234568.4062500 1.2345679e+06 | 9.69e-01 7.85e-07

7 1234567.4375000 1234567.9218750 1.2345677e+06 | 4.84e-01 3.92e-07

8 1234567.6796875 1234567.9218750 1.2345678e+06 | 2.42e-01 -----

9 1234567.8007812 1234567.9218750 1.2345679e+06 | 1.21e-01 -----

10 1234567.8613281 1234567.9218750 1.2345679e+06 | 6.05e-02 -----

11 1234567.8613281 1234567.8916016 1.2345679e+06 | 3.03e-02 -----

12 1234567.8764648 1234567.8916016 1.2345679e+06 | 1.51e-02 -----

13 1234567.8840332 1234567.8916016 1.2345679e+06 | 7.57e-03 -----

14 1234567.8878174 1234567.8916016 1.2345679e+06 | 3.78e-03 -----

15 1234567.8897095 1234567.8916016 1.2345679e+06 | 1.89e-03 -----

16 1234567.8897095 1234567.8906555 1.2345679e+06 | 9.46e-04 -----

17 1234567.8897095 1234567.8901825 1.2345679e+06 | 4.73e-04 -----

18 1234567.8899460 1234567.8901825 1.2345679e+06 | 2.37e-04 -----

19 1234567.8900642 1234567.8901825 1.2345679e+06 | 1.18e-04 -----

20 1234567.8901234 1234567.8901825 1.2345679e+06 | 5.91e-05 -----

21 1234567.8901234 1234567.8901529 1.2345679e+06 | 2.96e-05 -----

22 1234567.8901234 1234567.8901381 1.2345679e+06 | 1.48e-05 -----

23 1234567.8901234 1234567.8901308 1.2345679e+06 | 7.39e-06 -----

24 1234567.8901234 1234567.8901271 1.2345679e+06 | 3.70e-06 -----

25 1234567.8901234 1234567.8901252 1.2345679e+06 | 1.85e-06 -----

26 1234567.8901243 1234567.8901252 1.2345679e+06 | 9.24e-07 -----

27 1234567.8901243 1234567.8901248 1.2345679e+06 | 4.62e-07 -----

In 27 steps the absolute error case gives 1234567.890124522149562836

In 7 steps the relative error case gives 1234567.679687500000000000

as the approximation to 1234567.89012456789

In this case, the absolute difference criterion tries to obtain 6 decimal places even though there are 7 digits before the decimal point. The relative difference criterion gives 7 significant digits, all of them before the decimal point.

These two examples show that the relative difference method produces much more satisfactory results than does the absolute difference method.

A common idea used in implementations of the bisection method is to predetermine the number of steps required to achieve a desired accuracy.

This is done by noting that, after n bisections, the maximum difference between the root and the approximation is

:|c_n-c|\le\frac{b-a}{2^n} < \epsilon.

This formula has been used to determine, in advance, an upper bound on the number of iterations that the bisection method needs to converge to a root within a certain number of decimal places.

The number n of iterations needed to achieve such a required tolerance ε is bounded by

:n \le \left\lceil\log_2\left(\frac{b-a}{\epsilon}\right)\right\rceil
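For example, this a priori bound can be evaluated directly (a small illustration of the formula, using an absolute tolerance):

<syntaxhighlight lang="python">
import math

def max_iterations(a, b, eps):
    """Upper bound on bisection steps needed to shrink [a, b] below an absolute tolerance eps."""
    return math.ceil(math.log2((b - a) / eps))

print(max_iterations(0, 1, 5e-7))   # 21
</syntaxhighlight>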

The problem is that this bound is derived from the absolute difference criterion and hence, as shown above, should not be relied upon.

An alternative approach has been suggested in an MIT note on convergence tests, "Convergence Tests, RTOL and ATOL":

http://web.mit.edu/10.001/Web/Tips/Converge.htm

"Tolerances are usually specified as either a relative tolerance RTOL or an absolute tolerance ATOL, or both. The user typically desires that

:|\text{True value} - \text{Computed value}| < \text{RTOL}\times|\text{True value}| + \text{ATOL} \qquad \text{(Eq. 1)}

where the RTOL controls the number of significant figures in the computed value (a float or a double), and a small ATOL is a just a "safety net" for the case where True Value is close to zero. (What would happen if ATOL = 0 and True Value = 0? Would the convergence test ever be satisfied?) You should write your programs to take both RTOL and ATOL as inputs."
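As a minimal sketch of how such a combined test could be applied to the bisection bracket (the function below and the use of the bracket width in place of the unknown true error are illustrative assumptions, not part of the MIT note):

<syntaxhighlight lang="python">
def converged(a, b, rtol, atol):
    """Combined test: the bracket [a, b] is accepted when its width is below
    rtol times the magnitude of the midpoint plus atol."""
    c = a + (b - a) / 2
    return (b - a) <= rtol * abs(c) + atol

print(converged(1234567.8901234, 1234567.8901529, 5e-7, 0.0))   # True: wide relative margin for a large root
print(converged(0.0, 2.4e-7, 5e-7, 0.0))                        # False: with atol = 0 the relative part does not accept this bracket near zero
</syntaxhighlight>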

If the 'True Value' is large, then the RTOL term controls the error, so the combined test helps in that case.

If the 'True Value' is small, then the error is controlled by ATOL, which reintroduces the problem of the absolute criterion described above.

The question "What would happen if ATOL = 0 and True Value = 0? Would the convergence test ever be satisfied?" is asked, but no attempt is made to answer it.

An answer to this question is given below.

IEEE Standard 754 for Computer Arithmetic

If the algorithm is being used in the real number system, it is possible to continue the bisection until the relative error produces the desired approximation.

If the algorithm is used with computer arithmetic, a further problem arises.

In order to improve reliability and portability, the Institute of Electrical and Electronics Engineers (IEEE) produced a standard for floating-point arithmetic in 1985 and revised it in 2008 and 2019; see IEEE 754. {{Cite book |title=IEEE Standard for Floating-Point Arithmetic |series=IEEE STD 754-2019 |pages=1–84 |author=IEEE Computer Society |date=22 July 2019 |publisher=IEEE |id=IEEE Std 754-2019 |doi=10.1109/IEEESTD.2019.8766229 |isbn=978-1-5044-5924-2 |ref=CITEREFIEEE_7542019}} The IEEE 754 representation is the one used on most modern computers; it is, for example, the basis of the PC floating-point processor.

Double-precision (binary64) numbers occupy 64 bits, divided into a sign bit s, an 11-bit biased exponent e, and a 52-bit fraction f.

Because a normalized binary significand always has a leading 1, that bit is not stored; the processor supplies it, so the significand effectively carries 53 bits of precision. For normal numbers, with 0 < e < 2047, the value represented is

:(-1)^{s}\, 2^{e-1023}\, 1.f

The range of positive normal double-precision numbers is approximately

:(2.2250738585072014 \times 10^{-308},\ 1.7976931348623157 \times 10^{308})

If both the exponent and the fraction are 0, the number is 0 (with a sign). An exponent of 0 with a nonzero fraction denotes a subnormal number, which is stored without the implicit leading 1 and therefore has reduced precision.

In order to deal with the remaining extreme situations, the exponent 2047 is reserved for the infinities and NaN (Not a Number, produced for example by 0/0).

A number is thus stored in the following form:

class="wikitable"
style="background-color: lightblue"| .

|style="background-color: pink"|

|style="background-color: pink"|

|style="background-color: pink"|

|style="background-color: pink"|

|style="background-color: pink"|

|style="background-color: pink"|

|style="background-color: pink"|

|style="background-color: pink"|

|style="background-color: pink"|

|style="background-color: pink"|

|style="background-color: pink"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

|style="background-color: lightgreen"|

style="text-align: center;background-color:|s

|style="text-align: center;background-color: pink" colspan="11" |e

|style="text-align: center;background-color: lightgreen" colspan="53" |f

style="text-align: center;background-color:|63

|style="text-align: right;background-color: pink" colspan="11" |52

|style="text-align: right;background-color: lightgreen" colspan="53" |0

The following are examples of some double precision numbers:

class="wikitable"
+ Double Precision
style="text-align: center;" colspan="20" |Decimal 3
0

|style="background-color: pink"|100

|style="background-color: pink"|0000

|style="background-color: pink"|0000

|style="background-color: lightgreen"|1000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

style="text-align: center;background-color: pink" colspan="2" |4

|style="text-align: center;background-color: pink" |0

|style="text-align: center;background-color: pink" |0

|style="text-align: center;background-color: lightgreen" |8

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

style="text-align: center;" colspan="20" |Positive infinity (+\infty)
0

|style="background-color: pink"|111

|style="background-color: pink"|1111

|style="background-color: pink"|1111

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

style="text-align: center;background-color: pink" colspan="2" |7

|style="text-align: center;background-color: pink" |F

|style="text-align: center;background-color: pink" |F

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

style="text-align: center;" colspan="20" |Max. double 1.7976931348623157 × 10^{308}
0

|style="background-color: pink"|111

|style="background-color: pink"|1111

|style="background-color: pink"|1110

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

style="text-align: center;background-color: pink" colspan="2" |7

|style="text-align: center;background-color: pink" |F

|style="text-align: center;background-color: pink" |E

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

style="text-align: center;" colspan="20" |Min. normal 2.2250738585072014 × 110^{-308}
0

|style="background-color: pink"|000

|style="background-color: pink"|0000

|style="background-color: pink"|0001

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

style="text-align: center;background-color: pink" colspan="2" |0

|style="text-align: center;background-color: pink" |0

|style="text-align: center;background-color: pink" |1

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

style="text-align: center;" colspan="20" |Max. subnormal 2.2250738585072009 × 10^{-308}
0

|style="background-color: pink"|000

|style="background-color: pink"|0000

|style="background-color: pink"|0000

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

|style="background-color: lightgreen"|1111

style="text-align: center;background-color: pink" colspan="2" |0

|style="text-align: center;background-color: pink" |0

|style="text-align: center;background-color: pink" |0

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

|style="text-align: center;background-color: lightgreen" |F

style="text-align: center;" colspan="20" |Min. subnormal 4.9406564584124654 × 10^{-324}
0

|style="background-color: pink"|000

|style="background-color: pink"|0000

|style="background-color: pink"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0001

style="text-align: center; background-color: pink" colspan="2" |0

|style="text-align: center; background-color: pink" |0

|style="text-align: center; background-color: pink" |0

|style="text-align: center; background-color: lightgreen" |0

|style="text-align: center; background-color: lightgreen" |0

|style="text-align: center; background-color: lightgreen" |0

|style="text-align: center; background-color: lightgreen" |0

|style="text-align: center; background-color: lightgreen" |0

|style="text-align: center; background-color: lightgreen" |0

|style="text-align: center; background-color: lightgreen" |0

|style="text-align: center; background-color: lightgreen" |0

|style="text-align: center; background-color: lightgreen" |0

|style="text-align: center; background-color: lightgreen" |0

|style="text-align: center; background-color: lightgreen" |0

|style="text-align: center; background-color: lightgreen" |0

|style="text-align: center; background-color: lightgreen" |1

style="text-align: center;" colspan="20" |NaN
0

|style="background-color: pink"|111

|style="background-color: pink"|1111

|style="background-color: pink"|1111

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0000

|style="background-color: lightgreen"|0001

style="text-align: center; background-color: pink" colspan="2" |7

|style="text-align: center; background-color: pink" |F

|style="text-align: center; background-color: pink" |F

|style="text-align: center; background-color: lightgreen" |0

|style="text-align: center; background-color: lightgreen" |0

|style="text-align: center; background-color: lightgreen" |0

|style="text-align: center; background-color: lightgreen" |0

|style="text-align: center; background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |0

|style="text-align: center;background-color: lightgreen" |1

  • The first one (decimal 3) illustrates that 3 (binary 11) has a single 1 in the stored fraction; the leading 1 is implicit.
  • The second one is an example for which the exponent is 2047 (+\infty).
  • The third one gives the largest number that can be represented in double precision. Note that 1.7976931348623157e+308 + 0.0000000000000001e+308 = inf.
  • The next one, the minimum normal, represents the smallest number that can be used with full double precision.
  • The maximum subnormal and the minimum subnormal represent a range of numbers that have less than full double precision.
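These bit fields can be inspected directly; the following is a small sketch using Python's struct module (assuming the interpreter uses IEEE 754 binary64 for float, as CPython does on ordinary hardware):

<syntaxhighlight lang="python">
import struct

def double_fields(x):
    """Return (sign, biased exponent, fraction) of a Python float."""
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF
    fraction = bits & ((1 << 52) - 1)
    return sign, exponent, fraction

s, e, f = double_fields(3.0)
print(s, e, hex(f))      # 0 1024 0x8000000000000  (the decimal 3 row above)
s, e, f = double_fields(5e-324)
print(s, e, hex(f))      # 0 0 0x1                 (the minimum subnormal)
</syntaxhighlight>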

It is the minimum subnormal that is crucial for the bisection algorithm.

If

:b - a < 9.8813129168249309\times 10^{-324}

(twice the minimum subnormal), the interval cannot be divided further and the process must stop.
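This limit can be demonstrated directly in Python (a small illustration; 5e-324 is the minimum subnormal):

<syntaxhighlight lang="python">
tiny = 5e-324                   # minimum subnormal double
a, b = 0.0, tiny                # width below twice the minimum subnormal
print(a + (b - a) / 2 == a)     # True: the midpoint rounds back to a
a, b = 0.0, 2 * tiny            # width equal to twice the minimum subnormal
print(a + (b - a) / 2 == a)     # False: the interval can still be split
</syntaxhighlight>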

Algorithm

{{cleanup|date=May 2025|reason=This section includes raw source code instead of an algorithm description. Please rewrite using pseudocode or clear steps.}}

<syntaxhighlight lang="python">
import numpy as np

def bisect(f, a, b, tol, bound=9.8813129168249309e-324):
    ###########################################################################
    # input:      Function f,
    #             endpoint values a, b,
    #             tolerance tol (if tol = 5e-t and bound = 9.0e-324 the function
    #                 returns t significant digits for a root between the
    #                 minimum normal and the maximum normal),
    #             bound (if bound = 9.8813129168249309e-324, the algorithm continues
    #                 until the interval cannot be further divided; a larger value
    #                 may result in termination before t digits are found).
    # conditions: f is a continuous function on the interval [a, b],
    #             a < b,
    #             and f(a)*f(b) < 0.
    # output:     [root, iterations, convergence, termination condition]
    ###########################################################################
    if b <= a:
        return [float("NAN"), 0, "No convergence", "b < a"]
    fa = f(a)
    fb = f(b)
    if np.sign(fa) == np.sign(fb):
        return [float("NAN"), 0, "No convergence", "f(a)*f(b) > 0"]
    en = 0
    while en < 2200:
        en += 1
        if np.sign(a) == np.sign(b):   # avoid overflow in (a + b)
            c = a + (b - a) / 2
        else:
            c = (a + b) / 2
        fc = f(c)
        if b - a <= bound:
            return [bound, en, "No convergence", "Bound reached"]
        if fc == 0:
            return [c, en, "Converged", "f(c) = 0"]
        if b - a <= abs(c) * tol:
            return [c, en, "Converged", "Tolerance"]
        if np.sign(fa) == np.sign(fc):
            a = c
            fa = fc
        else:
            b = c
    return [float("NAN"), en, "No convergence", "Bad function"]
</syntaxhighlight>

The first 2 examples test for incorrect input values:

1 bisect(lambda x: x - 1, 5, 1, 5.000000e-15, 9.8813129168249309e-324)

Approx. root = nan

No convergence after 0 iterations with termination b < a

Final interval [ nan, nan]

2 bisect(lambda x: x - 1, 5, 7, 5.000000e-15, 9.8813129168249309e-324)

Approx. root = nan

No convergence after 0 iterations with termination f(a)*f(b) > 0

Final interval [ nan, nan]

Large roots:

3 bisect(lambda x: x - 12345678901.23456, 0, 1.23457e+14, 5.000000e-15, 9.8813129168249309e-324)

Approx. root = 12345678901.23454

Converged after 62 iterations with termination Tolerance

Final interval [1.2345678901234526e+10, 1.2345678901234552e+10]

4 bisect(lambda x: x - 1.23456789012456e+100, 0, 2e+100, 5.000000e-15, 9.8813129168249309e-324)

Approx. root = 1.234567890124561e+100

Converged after 50 iterations with termination Tolerance

Final interval [1.2345678901245599e+100, 1.2345678901245619e+100]

The final interval is computed as [c - w/2, c + w/2], where w = \frac{b - a}{2^n}. This gives a good measure of the accuracy of the approximation.

Root near maximum:

5 bisect(lambda x: x - 1.234567890123456e+307, 0, 1e+308, 5.000000e-15, 9.8813129168249309e-324)

Approx. root = 1.234567890123454e+307

Converged after 52 iterations with termination Tolerance

Final interval [1.2345678901234535e+307, 1.2345678901234555e+307]

Small roots:

6 bisect(lambda x: x - 1.234567890123456e-05, 0, 1, 5.000000e-15, 9.8813129168249309e-324)

Approx. root = 1.234567890123455e-05

Converged after 65 iterations with termination Tolerance

Final interval [1.2345678901234537e-05, 1.2345678901234564e-05]

7 bisect(lambda x: x - 1.234567890123456e-100, 0, 1, 5.000000e-15, 9.8813129168249309e-324)

Approx. root = 1.234567890123454e-100

Converged after 381 iterations with termination Tolerance

Final interval [1.2345678901234532e-100, 1.2345678901234552e-100]

The root in Ex. 8 lies below the minimum normal, but a fairly good result is obtained because the final interval is small. Calculations with values in the subnormal range can nevertheless produce unexpected results.

8 bisect(lambda x: x - 1.234567890123457e-310, 0, 1, 5.000000e-15, 9.8813129168249309e-324)

Approx. root = 1.234567890123457e-310

Converged after 1071 iterations with termination f(c) = 0

Final interval [1.2345678901232595e-310, 1.2345678901236548e-310]

If the return state is 'f(c) = 0', then the desired tolerance may not have been achieved. This can be checked by relaxing the tolerance until a return state of 'Tolerance' is achieved.

8a bisect(lambda x: x - 1.234567890123457e-310, 0, 1, 5.000000e-13)

Approx. root = 1.234567890123457e-310

Converged after 1071 iterations with termination f(c) = 0

Final interval [1.2345678901232595e-310, 1.2345678901236548e-310]

8b bisect(lambda x: x - 1.234567890123457e-310, 0, 1, 5.000000e-12)

Approx. root = 1.234567890124643e-310

Converged after 1069 iterations with termination Tolerance

Final interval [1.2345678901238524e-310, 1.2345678901254334e-310]

8b shows that the result has 12 digits.

Even though the root is outside the 'normal' range, it may still be possible to achieve results with good tolerance.

9 bisect(lambda x: x - 1.234567891003685e-315, 0, 1, 5.000000e-03, 9.8813129168249309e-324)

Approx. root = 1.23558592808891e-315

Converged after 1055 iterations with termination Tolerance

Final interval [1.2342907646422757e-315, 1.2368810915355439e-315]


Ex. 10 shows the maximum number of iterations that should be expected:

10 bisect(lambda x: x - 1.234567891003685e-315, -1e+307, 1e+307, 5.000000e-15, 9.8813129168249309e-324)

Approx. root = 1.234567891003685e-315

Converged after 2093 iterations with termination f(c) = 0

Final interval [1.2345678910036845e-315, 1.2345678910036845e-315]

There may be situations in which a 'good' approximation is not required. This can be achieved by changing the 'Bound':

11 bisect(lambda x: x - 1.234567890123457e-100, 0, 1, 5.000000e-15, 4.9999999999999997e-12)

Approx. root = 5e-12

No convergence after 39 iterations with termination Bound reached

Final interval [4.0905052982270715e-12, 5.9094947017729279e-12]

Evaluation of the final interval may assist in determining accuracy.

The following shows the behavior of subnormal numbers and how significant digits are lost:

print(1.234567890123456e-310)

1.23456789012346e-310

print(1.234567890123456e-312)

1.234567890124e-312

print(1.234567890123456e-315)

1.23456789e-315

print(1.234567890123456e-317)

1.234568e-317

print(1.234567890123456e-319)

1.23457e-319

print(1.234567890123456e-321)

1.235e-321

print(1.234567890123456e-323)

1e-323

print(1.234567890123456e-324)

0.0

These examples indicate that this method gives 15-digit accuracy for functions of the form f(x) = (x - r) g(x) for all roots r in the range of normal numbers.

Higher order roots

Further problems can arise from the use of computer arithmetic for higher order roots.

To see how inaccurate results can be detected and corrected, consider the following:

bisect(lambda x: (x - 1.23456789012345e-100), 0, 1, 5e-15)

Approx. root = 1.23456789012345e-100

Converged after 381 iterations with termination f(c) = 0

Final interval [1.2345678901234491e-100, 1.2345678901234511e-100]

The final interval [1.2345678901234491e-100, 1.2345678901234511e-100] indicates fairly good accuracy. The bisection method has a distinct advantage over other root-finding techniques in that the final interval can be used to determine the accuracy of the final solution. This information will be useful in assessing the accuracy of the following examples.

Next consider what happens for a root of order 3:

bisect(lambda x: (x - 1.23456789012345e-100)**3, 0, 1, 5e-15)

Approx. root = 1.234567898094279e-100

Converged after 357 iterations with termination f(c) = 0

Final interval [1.2345678810624394e-100, 1.2345679151261181e-100]

The final interval [1.2345678810624394e-100, 1.2345679151261181e-100] indicates that 15 digits have not been returned.

The relative error

(1.234567898094279e-100 - 1.23456789012345e-100)/1.23456789012345e-100

= 6.456371473106003e-09

shows that only 8 digits are correct and again f(c) = 0. This occurs because

:\begin{aligned}
f(\text{approx. root}) &= f(1.234567898094279\times10^{-100}) \\
&= (1.234567898094279\times10^{-100} - 1.23456789012345\times10^{-100})^3 \\
&= (7.970828885817127\times10^{-109})^3 \\
&= 5.064195\times10^{2}\times10^{-327} \\
&= 5.064195\times10^{-325}
\end{aligned}

Because this is less than the minimum subnormal, it returns a value of 0.
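This underflow is easy to reproduce (a short check of the computation above):

<syntaxhighlight lang="python">
d = 7.970828885817127e-109   # the difference computed above
print(d * d * d)             # 0.0: the true cube, about 5.06e-325, underflows below the minimum subnormal
</syntaxhighlight>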

This can occur in any root-finding technique, not just the bisection method; the problem can be diagnosed only because the return value records which stopping criterion was met.

The use of the relative error as a stopping condition makes it possible to determine how accurate a solution can actually be obtained.

Consider what happens on trying to achieve 8 significant figures:

bisect(lambda x: (x - 1.23456789012345e-100)**3, 0, 1, 5e-8)

[1.2345678980942788e-100, 357, 'Converged', 'f(c) = 0']

The termination f(c) = 0 indicates that eight digits of accuracy may not have been achieved, so try

bisect(lambda x: (x - 1.23456789012345e-100)**3, 0, 1, 5e-4)

[1.2347947281308757e-100, 344, 'Converged', 'Tolerance']

At least four digits have been achieved and

bisect(lambda x: (x - 1.23456789012345e-100)**3, 0, 1, 5e-6)

[1.2345658202098768e-100, 351, 'Converged', 'Tolerance']

6 digit convergence

bisect(lambda x: (x - 1.23456789012345e-100)**3, 0, 1, 5e-7)

[1.2345677277758852e-100, 354, 'Converged', 'Tolerance']

7 digit convergence

A similar problem can arise if there are two small roots close together:

bisect(lambda x: (x - 1.23456789012345e-23)*x, 1e-300, 1, 5e-15)

[1.2345678901234481e-23, 125, 'Converged', 'Tolerance']

15 digit convergence

bisect(lambda x: (x - 1.23456789012345e-24)*x, 1e-300, 1e-20, 5e-1)

[1.5509016039626554e-300, 931, 'Converged', 'f(c) = 0']

Final interval [1.2754508019813276e-300, 1.8263524059439830e-300]

relative error = 3.5521376891678086e-1 -- 1 digit convergence

bisect(lambda x: (x - 1.23456789012345e-23)*x, 1e-300, 1, 5e-1)

[1.1580528575742387e-23, 79, 'Converged', 'Tolerance']

Final interval [1.0753347963189360e-23, 1.2407709188295415e-23]

relative error = 1.4285714285714285e-1 -- 1 digit convergence


Generalization to higher dimensions

The bisection method has been generalized to multi-dimensional functions. Such methods are called generalized bisection methods.{{Cite journal |last1=Mourrain |first1=B. |last2=Vrahatis |first2=M. N. |last3=Yakoubsohn |first3=J. C. |date=2002-06-01 |title=On the Complexity of Isolating Real Roots and Computing with Certainty the Topological Degree |journal=Journal of Complexity |language=en |volume=18 |issue=2 |pages=612–640 |doi=10.1006/jcom.2001.0636 |issn=0885-064X|doi-access=free }}{{Cite book |last=Vrahatis |first=Michael N. |title=Numerical Computations: Theory and Algorithms |chapter=Generalizations of the Intermediate Value Theorem for Approximating Fixed Points and Zeros of Continuous Functions |series=Lecture Notes in Computer Science |date=2020 |volume=11974 |editor-last=Sergeyev |editor-first=Yaroslav D. |editor2-last=Kvasov |editor2-first=Dmitri E. |chapter-url=https://link.springer.com/chapter/10.1007/978-3-030-40616-5_17 |language=en |location=Cham |publisher=Springer International Publishing |pages=223–238 |doi=10.1007/978-3-030-40616-5_17 |isbn=978-3-030-40616-5|s2cid=211160947 }}

= Methods based on degree computation =

Some of these methods are based on computing the topological degree, which for a bounded region \Omega \subseteq \mathbb{R}^n and a differentiable function f: \mathbb{R}^n \rightarrow \mathbb{R}^n is defined as a sum over its roots:

:\deg(f, \Omega) := \sum_{y\in f^{-1}(\mathbf{0})} \sgn \det(Df(y)),

where Df(y) is the Jacobian matrix, \mathbf{0} = (0,0,...,0)^T, and

:\sgn(x) = \begin{cases}

1, & x>0 \\

0, & x=0 \\

-1, & x<0 \\

\end{cases}

is the sign function.{{cite journal |last1=Polymilis |first1=C. |last2=Servizi |first2=G. |last3=Turchetti |first3=G. |last4=Skokos |first4=Ch. |last5=Vrahatis |first5=M. N. |journal=Libration Point Orbits and Applications |title=Locating Periodic Orbits by Topological Degree Theory |date=May 2003 |pages=665–676 |doi=10.1142/9789812704849_0031 |arxiv=nlin/0211044 |isbn=978-981-238-363-1 }} In order for a root to exist, it is sufficient that \deg(f, \Omega) \neq 0, and this can be verified using a surface integral over the boundary of \Omega.{{Cite journal |last=Kearfott |first=Baker |date=1979-06-01 |title=An efficient degree-computation method for a generalized method of bisection |url=https://doi.org/10.1007/BF01404868 |journal=Numerische Mathematik |language=en |volume=32 |issue=2 |pages=109–127 |doi=10.1007/BF01404868 |s2cid=122058552 |issn=0945-3245|url-access=subscription }}

= Characteristic bisection method =

The characteristic bisection method uses only the signs of a function at different points. Let f be a function from R^d to R^d, for some integer d ≥ 2. A characteristic polyhedron{{Cite journal |last=Vrahatis |first=Michael N. |date=1995-06-01 |title=An Efficient Method for Locating and Computing Periodic Orbits of Nonlinear Mappings |url=https://www.sciencedirect.com/science/article/pii/S0021999185711199 |journal=Journal of Computational Physics |language=en |volume=119 |issue=1 |pages=105–119 |doi=10.1006/jcph.1995.1119 |bibcode=1995JCoPh.119..105V |issn=0021-9991|url-access=subscription }} (also called an admissible polygon){{Cite journal |last1=Vrahatis |first1=M. N. |last2=Iordanidis |first2=K. I. |date=1986-03-01 |title=A rapid Generalized Method of Bisection for solving Systems of Non-linear Equations |url=https://doi.org/10.1007/BF01389620 |journal=Numerische Mathematik |language=en |volume=49 |issue=2 |pages=123–138 |doi=10.1007/BF01389620 |s2cid=121771945 |issn=0945-3245|url-access=subscription }} of f is a polytope in R^d, having 2^d vertices, such that in each vertex v, the combination of signs of f(v) is unique and the topological degree of f on its interior is not zero (a necessary criterion to ensure the existence of a root).{{cite journal |last1=Vrahatis |first1=M.N. |last2=Perdiou |first2=A.E. |last3=Kalantonis |first3=V.S. |last4=Perdios |first4=E.A. |last5=Papadakis |first5=K. |last6=Prosmiti |first6=R. |last7=Farantos |first7=S.C. |title=Application of the Characteristic Bisection Method for locating and computing periodic orbits in molecular systems |journal=Computer Physics Communications |date=July 2001 |volume=138 |issue=1 |pages=53–68 |doi=10.1016/S0010-4655(01)00190-4|bibcode=2001CoPhC.138...53V }} For example, for d=2, a characteristic polyhedron of f is a quadrilateral with vertices (say) A,B,C,D, such that:

  • {{tmath|1=\sgn f(A) = (-,-)}}, that is, f1(A)<0, f2(A)<0.
  • {{tmath|1=\sgn f(B) = (-,+)}}, that is, f1(B)<0, f2(B)>0.
  • {{tmath|1=\sgn f(C) = (+,-)}}, that is, f1(C)>0, f2(C)<0.
  • {{tmath|1=\sgn f(D) = (+,+)}}, that is, f1(D)>0, f2(D)>0.

A proper edge of a characteristic polygon is an edge between a pair of vertices, such that the sign vector differs by only a single sign. In the above example, the proper edges of the characteristic quadrilateral are AB, AC, BD and CD. A diagonal is a pair of vertices, such that the sign vector differs by all d signs. In the above example, the diagonals are AD and BC.

At each iteration, the algorithm picks a proper edge of the polyhedron (say, A{{--}}B), and computes the signs of f at its midpoint (say, M). Then it proceeds as follows (a minimal code sketch is given after the list):

  • If {{tmath|1=\sgn f(M) = \sgn(A)}}, then A is replaced by M, and we get a smaller characteristic polyhedron.
  • If {{tmath|1=\sgn f(M) = \sgn(B)}}, then B is replaced by M, and we get a smaller characteristic polyhedron.
  • Else, we pick a new proper edge and try again.
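A minimal sketch of one such refinement step for d = 2 in Python (the data structure, the longest-edge selection rule, and all names here are illustrative assumptions rather than a published implementation):

<syntaxhighlight lang="python">
import numpy as np

def sign_pattern(F, v):
    """Tuple of component signs of F at the point v."""
    return tuple(int(np.sign(x)) for x in F(v))

def refine_characteristic_polyhedron(F, vertices):
    """One bisection step on a characteristic polyhedron (vertices keyed by sign pattern).

    Proper edges (patterns differing in exactly one sign) are tried from longest
    to shortest; the first midpoint whose sign pattern matches an endpoint
    replaces that endpoint."""
    patterns = list(vertices)
    edges = [(sa, sb) for i, sa in enumerate(patterns) for sb in patterns[i + 1:]
             if sum(x != y for x, y in zip(sa, sb)) == 1]
    edges.sort(key=lambda e: -np.linalg.norm(vertices[e[0]] - vertices[e[1]]))
    for sa, sb in edges:
        m = (vertices[sa] + vertices[sb]) / 2     # midpoint of the proper edge
        sm = sign_pattern(F, m)
        if sm in (sa, sb):
            vertices[sm] = m                      # replace the matching vertex; the polyhedron shrinks
            return vertices
    return vertices                               # no proper edge could be refined

# Example: F has its only root at (0.3, 0.7); the square [-1, 1]^2 around it is characteristic.
F = lambda v: np.array([v[0] - 0.3, v[1] - 0.7])
quad = {(-1, -1): np.array([-1.0, -1.0]), (-1, 1): np.array([-1.0, 1.0]),
        ( 1, -1): np.array([ 1.0, -1.0]), ( 1, 1): np.array([ 1.0, 1.0])}
quad = refine_characteristic_polyhedron(F, quad)
print(quad[(-1, -1)])   # [-1.  0.]: the (-, -) vertex moved to the midpoint of a proper edge
</syntaxhighlight>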

Suppose the diameter (= length of longest proper edge) of the original characteristic polyhedron is {{mvar|D}}. Then, at least \log_2(D/\varepsilon) bisections of edges are required so that the diameter of the remaining polygon will be at most {{mvar|ε}}.{{Rp|page=11|location=Lemma.4.7}} If the topological degree of the initial polyhedron is not zero, then there is a procedure that can choose an edge such that the next polyhedron also has nonzero degree.{{cite journal |last1=Vrahatis |first1=Michael N. |title=Solving systems of nonlinear equations using the nonzero value of the topological degree |journal=ACM Transactions on Mathematical Software |date=December 1988 |volume=14 |issue=4 |pages=312–329 |doi=10.1145/50063.214384}}

See also

References

{{reflist|30em}}

  • {{cite book | last1=Burden | first1=Richard L. | last2=Faires | first2=J. Douglas | title=Numerical Analysis | publisher=Cengage Learning | edition=10th | year=2014 | chapter=2.1 The Bisection Algorithm | isbn=978-0-87150-857-7 | url-access=registration | url=https://archive.org/details/numericalanalys00burd }}

Further reading

  • {{cite journal | last1=Corliss | first1=George | title=Which root does the bisection algorithm find? | year=1977 | journal=SIAM Review | issn=1095-7200 | volume=19 | issue=2 | pages=325–327 | doi=10.1137/1019044 |ref=none}}
  • {{cite book | last1=Kaw | first1=Autar | last2=Kalu | first2=Egwu | year=2008 | title=Numerical Methods with Applications | edition=1st | url=http://numericalmethods.eng.usf.edu/topics/textbook_index.html | ref=none | url-status=dead | archive-url=https://web.archive.org/web/20090413123941/http://numericalmethods.eng.usf.edu/topics/textbook_index.html | archive-date=2009-04-13 }}