
Bisection and Fixed-Point Iteration

Root-finding algorithms solve equations of the form $f(x) = 0$ where $f: \mathbb{R} \to \mathbb{R}$ is continuous. The bisection method and fixed-point iteration provide fundamental approaches with different convergence guarantees.

Definition: Bisection Method

Given a continuous function $f: [a,b] \to \mathbb{R}$ with $f(a)f(b) < 0$, the bisection method iteratively halves the interval:

  1. Compute $c = (a+b)/2$
  2. If $f(c) = 0$, stop
  3. If $f(a)f(c) < 0$, set $b := c$; otherwise set $a := c$
  4. Repeat until $|b - a| < \epsilon$

The intermediate value theorem guarantees a root $r \in [a,b]$. After $n$ iterations, the midpoint $x_n$ satisfies $|r - x_n| \leq (b-a)/2^{n+1}$, giving linear convergence with rate $1/2$.

The bisection method is robust and always converges when $f(a)$ and $f(b)$ have opposite signs, but it converges slowly. It requires only function evaluations, not derivatives, making it suitable for non-smooth functions.
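Steps 1–4 above translate directly into code. Here is a minimal Python sketch; the function name `bisect` and the test equation $\cos x = x$ on $[0,1]$ are illustrative choices, not from the text:

```python
import math

def bisect(f, a, b, tol=1e-10, max_iter=100):
    """Bisection: requires f continuous on [a, b] with f(a)*f(b) < 0."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2
        if f(c) == 0 or (b - a) / 2 < tol:
            return c
        if f(a) * f(c) < 0:
            b = c          # root lies in [a, c]
        else:
            a = c          # root lies in [c, b]
    return (a + b) / 2

# Find the root of cos(x) - x on [0, 1].
root = bisect(lambda x: math.cos(x) - x, 0.0, 1.0)
```

Note that the sign check `f(a) * f(c) < 0` mirrors step 3 exactly, so the invariant $f(a)f(b) < 0$ is preserved at every iteration.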

Definition: Fixed-Point Iteration

A fixed point of $g: \mathbb{R} \to \mathbb{R}$ is a value $p$ satisfying $g(p) = p$. The fixed-point iteration is $x_{n+1} = g(x_n)$ for $n = 0, 1, 2, \ldots$

To solve $f(x) = 0$, rewrite it as $x = g(x)$, for example with $g(x) = x + \alpha f(x)$ for some $\alpha \neq 0$.
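A generic fixed-point loop takes only a few lines. In this sketch, the helper name `fixed_point` and the choice $f(x) = x^2 - 2$ with $\alpha = -1/4$ are assumptions (this $\alpha$ gives $g'(x) = 1 - x/2$, so $|g'| < 1$ near the root and the iteration contracts):

```python
def fixed_point(g, x0, tol=1e-12, max_iter=200):
    """Iterate x_{n+1} = g(x_n) until successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("fixed-point iteration did not converge")

# Solve x^2 - 2 = 0 via g(x) = x + alpha*f(x) with alpha = -1/4 (assumed example).
root = fixed_point(lambda x: x - 0.25 * (x * x - 2), 1.0)
```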

Example: Square Root Computation

To compute $\sqrt{a}$, solve $x^2 - a = 0$. Rewriting as $x = (x + a/x)/2$ gives the iteration $x_{n+1} = \frac{1}{2}\left(x_n + \frac{a}{x_n}\right)$

This is Newton's method for $f(x) = x^2 - a$. For $a = 2$ with $x_0 = 1$:

  • $x_1 = (1 + 2)/2 = 1.5$
  • $x_2 = (1.5 + 2/1.5)/2 = 1.41\overline{6}$
  • $x_3 = 1.414215686\ldots$

The error decreases quadratically: $|x_3 - \sqrt{2}| \approx 2 \times 10^{-6}$, and one more step drives it below $10^{-11}$.
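The quadratic shrinkage is easy to observe numerically; a short Python check for $a = 2$, as in the example above:

```python
import math

a, x = 2.0, 1.0
errors = []
for _ in range(4):
    x = 0.5 * (x + a / x)          # x_{n+1} = (x_n + a/x_n)/2
    errors.append(abs(x - math.sqrt(a)))

# Each error is roughly the square of the previous one (scaled by 1/(2*sqrt(a))),
# so the number of correct digits approximately doubles per step.
```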

Remark

Convergence Theorem: If $g \in C[a,b]$ with $g([a,b]) \subseteq [a,b]$ and $|g'(x)| \leq k < 1$ for all $x \in (a,b)$, then:

  1. $g$ has a unique fixed point $p \in [a,b]$
  2. The iteration $x_{n+1} = g(x_n)$ converges to $p$ for any $x_0 \in [a,b]$
  3. $|p - x_n| \leq \dfrac{k^n}{1-k}\,|x_1 - x_0|$

The constant $k$ is the contraction factor. Convergence is linear with rate $k$.
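The theorem can be checked numerically. Using $g(x) = \cos x$ on $[0,1]$ as an assumed example ($g$ maps $[0,1]$ into $[\cos 1, 1] \subset [0,1]$, and $|g'(x)| = |\sin x| \leq \sin 1 \approx 0.84 = k$), the error bound in the form $|p - x_n| \leq k^n\,|x_1 - x_0|/(1-k)$ holds at every step:

```python
import math

g = math.cos                       # g maps [0, 1] into [cos 1, 1], a subset of [0, 1]
k = math.sin(1.0)                  # contraction factor: |g'(x)| = |sin x| <= sin 1 on [0, 1]
p = 0.7390851332151607             # fixed point of cos x (the Dottie number)

x0 = 0.0
c = abs(g(x0) - x0)                # |x_1 - x_0|
x = x0
for n in range(1, 21):
    x = g(x)
    bound = k**n * c / (1 - k)
    assert abs(p - x) <= bound     # the theorem's bound holds for every n
```

In practice the iterates contract at the local rate $|g'(p)| = \sin p \approx 0.67$, noticeably faster than the worst-case $k$.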

The choice of $g$ dramatically affects convergence. For $f(x) = x^2 - a$, setting $g(x) = a/x$ produces an oscillation that never converges (the iterates alternate between $x_0$ and $a/x_0$), while $g(x) = (x + a/x)/2$ converges quadratically. This motivates Newton's method, which uses derivative information to construct a $g$ with $g'(p) = 0$.
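The contrast can be seen directly for $a = 2$ (a small Python sketch; six iterations of each scheme from the same start):

```python
a = 2.0

x = 1.0
for _ in range(6):
    x = a / x              # g(x) = a/x: iterates alternate 1, 2, 1, 2, ...
# After an even number of steps x is back at 1.0 -- no convergence.

y = 1.0
for _ in range(6):
    y = 0.5 * (y + a / y)  # g(x) = (x + a/x)/2: quadratic convergence
# y agrees with sqrt(2) to machine precision.
```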

Practical implementations incorporate stopping criteria based on both the absolute step $|x_{n+1} - x_n|$ and the relative step $|x_{n+1} - x_n|/|x_{n+1}|$, along with a maximum iteration limit. The bisection method provides guaranteed but slow convergence, while fixed-point iteration can converge rapidly if $g$ is well chosen but may diverge otherwise.
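A combined stopping test might look like the following sketch; the helper name `converged` and the default tolerances are assumptions, not from the text:

```python
def converged(x_new, x_old, abs_tol=1e-12, rel_tol=1e-10):
    """Stop when the last step is small absolutely OR relative to x_new."""
    step = abs(x_new - x_old)
    return step < abs_tol or step < rel_tol * abs(x_new)
```

The relative test keeps the criterion meaningful when the root is large in magnitude, while the absolute test handles roots near zero, where the relative one breaks down.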