Muller's method
{{Short description|Algorithm for finding roots of a function}}
{{More footnotes|date=February 2020}}
Muller's method is a root-finding algorithm, a numerical method for solving equations of the form f(x) = 0. It was first presented by David E. Muller in 1956.
Muller's method proceeds according to a third-order recurrence relation, similar to the second-order recurrence relation of the secant method. Whereas the secant method constructs a line through the two points on the graph of f corresponding to the last two iterative approximations and uses that line's root as the next approximation, Muller's method uses the three points corresponding to the last three iterative approximations, constructs a parabola through them, and takes a root of that parabola as the next approximation.
Derivation
Muller's method uses three initial approximations of the root, <math>x_0</math>, <math>x_1</math> and <math>x_2</math>, and determines the next approximation <math>x_3</math> by considering the intersection of the ''x''-axis with the parabola through <math>(x_0, f(x_0))</math>, <math>(x_1, f(x_1))</math> and <math>(x_2, f(x_2))</math>.
Consider the quadratic polynomial
{{NumBlk|:|<math>y = a(x - x_2)^2 + b(x - x_2) + c</math>|{{EquationRef|1}}}}
that passes through <math>(x_0, f(x_0))</math>, <math>(x_1, f(x_1))</math> and <math>(x_2, f(x_2))</math>. Define the differences
:<math>h_0 = x_1 - x_0, \quad h_1 = x_2 - x_1</math>
and
:<math>\delta_0 = \frac{f(x_1) - f(x_0)}{h_0}, \quad \delta_1 = \frac{f(x_2) - f(x_1)}{h_1}.</math>
Substituting each of the three points <math>(x_0, f(x_0))</math>, <math>(x_1, f(x_1))</math> and <math>(x_2, f(x_2))</math> into equation ({{EquationNote|1}}) and solving simultaneously for <math>a</math>, <math>b</math> and <math>c</math> gives
:<math>a = \frac{\delta_1 - \delta_0}{h_1 + h_0}, \quad b = ah_1 + \delta_1, \quad c = f(x_2).</math>
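These coefficient formulas can be checked numerically: the parabola they define should reproduce f at all three sample points. A minimal sketch (the cubic test function and the sample points are arbitrary choices for illustration):

```python
def coefficients(f, x0, x1, x2):
    """Coefficients a, b, c of the Muller parabola through
    (x0, f(x0)), (x1, f(x1)), (x2, f(x2)), expanded about x2."""
    h0, h1 = x1 - x0, x2 - x1
    delta0 = (f(x1) - f(x0)) / h0
    delta1 = (f(x2) - f(x1)) / h1
    a = (delta1 - delta0) / (h1 + h0)
    b = a * h1 + delta1
    c = f(x2)
    return a, b, c

f = lambda x: x**3 - 2 * x + 2   # arbitrary test function
x0, x1, x2 = 0.5, 1.0, 2.0
a, b, c = coefficients(f, x0, x1, x2)

# The parabola must interpolate f at the three points.
q = lambda x: a * (x - x2)**2 + b * (x - x2) + c
for x in (x0, x1, x2):
    assert abs(q(x) - f(x)) < 1e-12
```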
The quadratic formula is then applied to ({{EquationNote|1}}) to determine <math>x_3</math> as
:<math>x_3 - x_2 = \frac{-2c}{b \pm \sqrt{b^2 - 4ac}}.</math>
The sign preceding the radical term is chosen to match the sign of <math>b</math>, which maximizes the magnitude of the denominator and ensures that the next iterate <math>x_3</math> is the root of the parabola closest to <math>x_2</math>, giving
:<math>x_3 = x_2 - \frac{2c}{b + \sgn(b)\sqrt{b^2 - 4ac}}.</math>
Once <math>x_3</math> is determined, the process is repeated using <math>x_1</math>, <math>x_2</math> and <math>x_3</math>. Note that due to the radical expression in the denominator, iterates can be complex even when the previous iterates are all real. This is in contrast with other root-finding algorithms such as the secant method, Sidi's generalized secant method or Newton's method, whose iterates remain real if one starts with real numbers. Having complex iterates can be an advantage (if one is looking for complex roots) or a disadvantage (if it is known that all roots are real), depending on the problem.
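The update above can be collected into a single step. The sketch below (an illustrative helper, distinct from the full implementation later in the article) demonstrates two consequences of the construction: when f itself is quadratic, the interpolating parabola coincides with f, so one step lands exactly on a root; and with {{math|''f''(''x'') {{=}} ''x''<sup>2</sup> + 1}}, a single step from purely real starting points already produces a complex iterate.

```python
from cmath import sqrt  # complex square root, so real inputs may yield complex iterates

def muller_step(f, x0, x1, x2):
    """One iteration of Muller's method: returns the next iterate x3."""
    h0, h1 = x1 - x0, x2 - x1
    delta0 = (f(x1) - f(x0)) / h0
    delta1 = (f(x2) - f(x1)) / h1
    a = (delta1 - delta0) / (h1 + h0)
    b = a * h1 + delta1
    c = f(x2)
    root = sqrt(b * b - 4 * a * c)
    # Choose the sign that gives the larger-magnitude denominator.
    denom = b + root if abs(b + root) >= abs(b - root) else b - root
    return x2 - 2 * c / denom

# f is quadratic, so the parabola is exact and one step hits the root sqrt(612):
x3 = muller_step(lambda x: x**2 - 612, 10, 20, 30)
print(x3)  # ~24.7386..., i.e. sqrt(612)

# Real starting points, complex result: one step reaches a root of x^2 + 1.
z3 = muller_step(lambda x: x**2 + 1, 0, 1, 2)
print(abs(z3**2 + 1))  # ~0
```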
Speed of convergence
For well-behaved functions, the order of convergence of Muller's method is approximately 1.839 and is exactly the tribonacci constant. This compares with approximately 1.618 (exactly, the golden ratio) for the secant method and with exactly 2 for Newton's method. So the secant method makes less progress per iteration than Muller's method, while Newton's method makes more.
More precisely, if ξ denotes a single root of f (so f(ξ) = 0 and f′(ξ) ≠ 0), f is three times continuously differentiable, and the initial guesses x<sub>0</sub>, x<sub>1</sub> and x<sub>2</sub> are taken sufficiently close to ξ, then the iterates satisfy
:<math>\lim_{k \to \infty} \frac{|x_{k+1} - \xi|}{|x_k - \xi|^{\mu}} = \left| \frac{f'''(\xi)}{6f'(\xi)} \right|^{(\mu - 1)/2},</math>
where μ ≈ 1.84 is the positive solution of <math>x^3 - x^2 - x - 1 = 0</math>, the defining equation for the tribonacci constant.
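The constant μ can be pinned down numerically from its defining equation. A short bisection sketch on x³ − x² − x − 1 over [1, 2] (an interval chosen because the polynomial changes sign there) recovers μ ≈ 1.839:

```python
def tribonacci_constant(tol=1e-12):
    """Positive root of x^3 - x^2 - x - 1 = 0, found by bisection on [1, 2]."""
    p = lambda x: x**3 - x**2 - x - 1
    lo, hi = 1.0, 2.0  # p(1) = -2 < 0 and p(2) = 1 > 0, so a root lies between
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if p(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

mu = tribonacci_constant()
print(round(mu, 6))  # 1.839287
```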
Computational example
Below, Muller's method is implemented in the Python programming language. It takes as parameters the three initial estimates of the root, the desired number of decimal places of accuracy, and the maximum number of iterations. The program is then applied to find a root of the function {{math|''f''(''x'') {{=}} ''x''<sup>2</sup> − 612}}.
{| role="presentation" class="wikitable mw-collapsible mw-collapsed"
|Example implementation in Python
|-
|<syntaxhighlight lang="python">
from cmath import sqrt  # Use the complex sqrt as we may generate complex numbers

def func(x):
    return (x ** 2) - 612

def muller(x0: int, x1: int, x2: int, decimal_places: int, maximum_iterations: int):
    iteration_counter = 0
    iterates = [x0, x1, x2]
    solution_found = False
    while not solution_found and iteration_counter < maximum_iterations:
        final_index = len(iterates) - 1
        h0 = iterates[final_index - 1] - iterates[final_index - 2]
        h1 = iterates[final_index] - iterates[final_index - 1]
        f_x0 = func(iterates[final_index - 2])
        f_x1 = func(iterates[final_index - 1])
        f_x2 = func(iterates[final_index])
        delta0 = (f_x1 - f_x0) / h0
        delta1 = (f_x2 - f_x1) / h1
        coeff_a = (delta1 - delta0) / (h1 + h0)
        coeff_b = coeff_a * h1 + delta1
        coeff_c = f_x2
        sqrt_delta = sqrt(pow(coeff_b, 2) - 4 * coeff_a * coeff_c)
        denominators = [coeff_b - sqrt_delta, coeff_b + sqrt_delta]
        # Take the higher-magnitude denominator
        next_iterate = iterates[final_index] - (2 * coeff_c) / max(denominators, key=abs)
        iterates.append(next_iterate)
        solution_found = abs(func(next_iterate)) < pow(10, -decimal_places)
        iteration_counter = iteration_counter + 1
    if solution_found:
        print("Solution found: {}".format(next_iterate))
    else:
        print("No solution found.")

muller(10, 20, 30, 9, 20)
# Solution found: (24.73863375370596+0j)
</syntaxhighlight>
|}
See also
- Halley's method, with cubic convergence
- Householder's method, includes Newton's, Halley's and higher-order convergence
References
{{Reflist}}
- Atkinson, Kendall E. (1989). An Introduction to Numerical Analysis, 2nd edition, Section 2.4. John Wiley & Sons, New York. {{ISBN|0-471-50023-2}}.
- Burden, R. L. and Faires, J. D. Numerical Analysis, 4th edition, pages 77ff.
- {{Cite book | last1=Press | first1=WH | last2=Teukolsky | first2=SA | last3=Vetterling | first3=WT | last4=Flannery | first4=BP | year=2007 | title=Numerical Recipes: The Art of Scientific Computing | edition=3rd | publisher=Cambridge University Press | publication-place=New York | isbn=978-0-521-88068-8 | chapter=Section 9.5.2. Muller's Method | chapter-url=http://apps.nrbook.com/empanel/index.html#pg=466}}
Further reading
- A bracketing variant with global convergence: {{cite journal |last1=Costabile |first1=F. |last2=Gualtieri |first2=M.I. |last3=Luceri |first3=R. |title=A modification of Muller's method |journal=Calcolo |date=March 2006 |volume=43 |issue=1 |pages=39–50 |doi=10.1007/s10092-006-0113-9|s2cid=124772103 }}
{{Root-finding algorithms}}
{{DEFAULTSORT:Mullers method}}