# Newton's Method Algorithm

Newton's method is an iterative algorithm for finding a root of an equation of a single variable.

In Newton's method, the root of an equation of one independent variable is determined in the following way.

The equation is first written in a homogeneous form.

 f(x) = 0

Students are often first introduced to homogeneous form with the quadratic equation, where an equation for x containing both x and x² terms is written in the form ax² + bx + c = 0.
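As a concrete illustration (the particular equation below is an assumed example, not one from the original text), the equation x² = x + 6 can be rewritten in homogeneous form as f(x) = x² - x - 6 = 0:

```python
# Assumed example: rewrite x^2 = x + 6 in homogeneous form,
# f(x) = x^2 - x - 6 = 0, which has roots x = 3 and x = -2.
def f(x):
    return x**2 - x - 6

print(f(3))   # 0 at one root
print(f(-2))  # 0 at the other root
```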

Once the equation is in homogeneous form, the derivative f′(x) is determined. An initial guess x₀ for the root is made, and a new value of x is computed as

 xᵢ = xᵢ₋₁ - f(xᵢ₋₁)/f′(xᵢ₋₁)

Provided a root exists, and provided that f′(xᵢ₋₁) does not equal zero, each iteration produces successive values of xᵢ that approach one of the roots of the equation.
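A single iteration of this update rule can be sketched as follows (the function f(x) = x² - 2 and the starting guess are assumptions chosen for illustration):

```python
# One Newton step: x_new = x - f(x) / f'(x)
def newton_step(f, fprime, x):
    return x - f(x) / fprime(x)

# Assumed example: f(x) = x^2 - 2, whose positive root is sqrt(2)
f = lambda x: x**2 - 2
fprime = lambda x: 2 * x

x1 = newton_step(f, fprime, 1.0)  # 1.0 - (-1.0)/2.0 = 1.5
print(x1)
```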

The process is carried out until either some convergence criterion is met (usually when two successive iterates differ by less than some tolerance) or until it is determined that the process is unlikely to converge (usually by setting a limit on the number of iterations).
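Putting the update rule and the two stopping tests together, a minimal sketch of the full procedure might look like this (the tolerance, iteration cap, and test function are assumptions, not values from the original text):

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Find a root of f(x) = 0 by Newton's method.

    Stops when two successive iterates differ by less than tol,
    or raises if max_iter iterations pass without converging.
    """
    x = x0
    for _ in range(max_iter):
        d = fprime(x)
        if d == 0:
            raise ZeroDivisionError("f'(x) is zero; Newton's method fails here")
        x_new = x - f(x) / d
        if abs(x_new - x) < tol:  # convergence criterion
            return x_new
        x = x_new
    raise RuntimeError("did not converge within max_iter iterations")

# Assumed example: root of x^2 - 2 starting from x0 = 1
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0)
print(root)  # approximately 1.41421356..., i.e. sqrt(2)
```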

The following applet shows graphically how Newton's method is used to solve for the roots of an equation. Click on the Newton's Method example to open the applet.