Newton's method

Newton's method generates a sequence $\{x_k\}$ to find the root $\alpha$ of a function $f$ starting from an initial guess $x_0$. This initial guess $x_0$ should be close enough to the root $\alpha$ for the convergence to be guaranteed. We construct the tangent of $f$ at $x_0$ and we find an approximation of $\alpha$ by computing the root of the tangent. Repeating this iterative process we obtain the sequence $\{x_k\}$.
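As a concrete illustration (the specific function and starting point below are our own choice, not part of the original text): take $f(x) = x^2 - 2$, whose positive root is $\alpha = \sqrt{2} \approx 1.4142$, and $x_0 = 1$. The tangent of $f$ at $x_0$ is

$$y = f(1) + f'(1)(x - 1) = -1 + 2(x - 1),$$

and its root is $x_1 = 3/2 = 1.5$, already much closer to $\alpha$ than $x_0$ was.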

Derivation of Newton's Method

Approximating $f(x)$ with a second-order Taylor expansion around $x_k$,

$$f(x) = f(x_k) + f'(x_k)(x - x_k) + \frac{f''(\eta_k)}{2}(x - x_k)^2,$$

with $\eta_k$ between $x$ and $x_k$. Imposing $x = \alpha$, recalling that $f(\alpha) = 0$, and dividing through by $f'(x_k)$ (assumed nonzero), a little rearranging gives

$$\alpha = x_k - \frac{f(x_k)}{f'(x_k)} - \frac{(\alpha - x_k)^2}{2}\,\frac{f''(\eta_k)}{f'(x_k)}.$$

Neglecting the last term, we find an approximation of $\alpha$ which we shall call $x_{k+1}$. We now have an iteration which can be used to find successively more precise approximations of $\alpha$:

Newton's method:

$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}.$$
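The iteration translates directly into code. The following is a minimal sketch in Python; the function name `newton`, the tolerance, and the iteration cap are illustrative choices, not part of the original text:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Approximate a root of f by Newton's method.

    f        : function whose root is sought
    df       : its derivative f'
    x0       : initial guess (must be close enough to the root)
    tol      : stop when |f(x_k)| falls below this threshold
    max_iter : safety cap on the number of iterations
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = df(x)
        if dfx == 0.0:
            raise ZeroDivisionError("f'(x_k) vanished; Newton step undefined")
        x = x - fx / dfx          # x_{k+1} = x_k - f(x_k) / f'(x_k)
    return x                      # may not have converged

# Example: the positive root of f(x) = x^2 - 2 is sqrt(2)
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
print(root)  # ~1.4142135623730951
```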

Convergence Analysis

It is clear from the derivation that the error of Newton's method is given by

Newton's method error formula:

$$\alpha - x_{k+1} = -\frac{f''(\eta_k)}{2 f'(x_k)}\,(\alpha - x_k)^2.$$

From this we note that if the method converges, the error at each step is proportional to the square of the previous error, so the order of convergence is 2. On the other hand, whether Newton's method converges at all depends on the initial guess $x_0$.

The following theorem holds:

Theorem

Assume that $f(x)$, $f'(x)$, and $f''(x)$ are continuous in a neighborhood of the root $\alpha$ and that $f'(\alpha) \neq 0$. Then, if $x_0$ is taken close enough to $\alpha$, the sequence $\{x_k\}$, $k \geq 0$, defined by Newton's method converges to $\alpha$. Moreover, the order of convergence is $p = 2$, as

$$\lim_{k \to \infty} \frac{\alpha - x_{k+1}}{(\alpha - x_k)^2} = -\frac{f''(\alpha)}{2 f'(\alpha)}.$$
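This limit can be checked numerically. The short sketch below is our own illustration; the test function $f(x) = x^2 - 2$ and the number of steps are assumptions, not from the original. It tracks the ratio $(\alpha - x_{k+1})/(\alpha - x_k)^2$ and compares it with $-f''(\alpha)/(2 f'(\alpha))$:

```python
import math

f  = lambda x: x**2 - 2          # root alpha = sqrt(2)
df = lambda x: 2 * x             # f'(x)
alpha = math.sqrt(2)

x = 1.0                          # initial guess x_0
for k in range(4):
    x_new = x - f(x) / df(x)     # one Newton step
    ratio = (alpha - x_new) / (alpha - x)**2
    print(f"k={k}  x_{k+1}={x_new:.12f}  ratio={ratio:.6f}")
    x = x_new

# The ratios approach -f''(alpha)/(2 f'(alpha)) = -1/(2*sqrt(2)) ~ -0.353553
print(-2 / (2 * df(alpha)))
```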


Advantages and Disadvantages of the Newton-Raphson Method

The advantages of using Newton's method to approximate a root rest primarily in its rate of convergence: when the method converges, it does so quadratically. The method is also very simple to apply and has strong local convergence properties.

The disadvantages of using this method are numerous. First of all, Newton's method is not guaranteed to converge if we select an $x_0$ that is too far from the exact root. Likewise, if the tangent line becomes parallel or almost parallel to the $x$-axis (that is, if $f'(x_k)$ is zero or very small), we are not guaranteed convergence. Also, because two functions must be evaluated at each iteration ($f(x_k)$ and $f'(x_k)$), the method is computationally expensive. Another disadvantage is that we must have a functional representation of the derivative of our function, which is not always possible if we are working only from given data.
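As an illustration of the sensitivity to the initial guess (the example function is our own choice, not from the original text), applying Newton's method to $f(x) = \arctan(x)$, whose only root is $\alpha = 0$, diverges when $x_0$ is too large, because the nearly flat tangent lines overshoot the root:

```python
import math

f  = lambda x: math.atan(x)          # only root is alpha = 0
df = lambda x: 1.0 / (1.0 + x**2)    # derivative of arctan

x = 2.0                              # initial guess too far from the root
for k in range(5):
    x = x - f(x) / df(x)             # Newton step
    print(k, x)
# The iterates grow in magnitude with alternating sign instead of
# approaching 0; a closer starting point such as x0 = 0.5 converges.
```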