
Fixed-Point Theorem

The fixed-point theorem provides conditions under which iterative methods $x_{n+1} = g(x_n)$ converge, forming the theoretical foundation for analyzing root-finding algorithms, including Newton's method.

Theorem (Contraction Mapping Fixed-Point Theorem)

Let $g: [a,b] \to \mathbb{R}$ satisfy:

  1. $g \in C[a,b]$ (continuity)
  2. $g([a,b]) \subseteq [a,b]$ (maps the interval into itself)
  3. $\exists k \in (0,1)$ such that $|g'(x)| \leq k$ for all $x \in (a,b)$ (by the Mean Value Theorem, $g$ is Lipschitz with constant $k < 1$)

Then:

  • $g$ has a unique fixed point $p \in [a,b]$ (i.e., $g(p) = p$)
  • For any $x_0 \in [a,b]$, the iteration $x_{n+1} = g(x_n)$ converges to $p$
  • The error satisfies the a priori bound $|x_n - p| \leq \frac{k^n}{1-k}|x_1 - x_0|$ and the a posteriori bound $|x_n - p| \leq \frac{k}{1-k}|x_n - x_{n-1}|$

The constant $k$ is called the contraction factor. Convergence is linear with asymptotic error constant $k$; smaller $k$ means faster convergence, while $k$ near 1 implies slow convergence.
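The iteration and the a posteriori stopping rule above can be sketched in a few lines of Python; the function name `fixed_point` and the tolerance defaults are illustrative, not from the source.

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=1000):
    """Iterate x_{n+1} = g(x_n), stopping when successive iterates agree
    to within tol (the a posteriori bound then controls |x_n - p|)."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")
```

For a contraction with factor $k$, stopping when $|x_n - x_{n-1}| < \text{tol}$ guarantees $|x_n - p| < \frac{k}{1-k}\,\text{tol}$ by the a posteriori bound.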

Remark

Relationship to Newton's Method: Newton's method $x_{n+1} = x_n - f(x_n)/f'(x_n)$ is a fixed-point iteration with $g(x) = x - \frac{f(x)}{f'(x)}$.

Computing $g'(x)$ by the quotient rule: $g'(x) = 1 - \frac{f'(x)^2 - f(x)f''(x)}{f'(x)^2} = \frac{f(x)f''(x)}{f'(x)^2}$

At a simple root $r$, where $f(r) = 0$ and $f'(r) \neq 0$, we have $g'(r) = 0$, which explains Newton's quadratic (rather than merely linear) convergence.
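A quick numerical check of this claim, assuming the standard example $f(x) = x^2 - 2$ (not taken from the source): the Newton fixed-point map is $g(x) = (x + 2/x)/2$, and the error roughly squares at each step because $g'(\sqrt{2}) = 0$.

```python
import math

# Newton's method for f(x) = x^2 - 2, written as the fixed-point map
# g(x) = x - f(x)/f'(x) = (x + 2/x)/2; the root is sqrt(2).
def g_newton(x):
    return 0.5 * (x + 2.0 / x)

x = 1.0
errors = []
for _ in range(5):
    x = g_newton(x)
    errors.append(abs(x - math.sqrt(2)))
# errors shrink quadratically: each one is roughly the square
# of the previous, times a constant ~ f''(r)/(2 f'(r))
```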

Example: Solving $x = \cos(x)$

The equation $x = \cos(x)$ has a unique solution, since $\cos(x)$ crosses $y = x$ exactly once. Using $g(x) = \cos(x)$ on $[0,1]$:

  • $g([0,1]) = [\cos(1), 1] \approx [0.540, 1] \subset [0,1]$
  • $|g'(x)| = |\sin(x)| \leq \sin(1) \approx 0.841 < 1$ on $[0,1]$

The iteration converges with contraction factor $k \approx 0.841$:

  • $x_0 = 1$
  • $x_1 = \cos(1) \approx 0.540$
  • $x_5 \approx 0.701$
  • $x_{20} \approx 0.7391$, approaching the fixed point $p \approx 0.739085133$
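The iterates above can be reproduced directly (a minimal sketch; variable names are illustrative):

```python
import math

x = 1.0
iterates = [x]
for _ in range(20):
    x = math.cos(x)
    iterates.append(x)
# The sequence oscillates around the fixed point p ~ 0.739085,
# approaching it from alternating sides since g'(p) = -sin(p) < 0.
```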

Convergence is relatively slow because $k$ is near 1; accelerated methods such as Aitken's $\Delta^2$ process can improve the convergence rate.
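One way to sketch the acceleration, under the assumption that each accelerated value restarts the iteration (this variant is Steffensen's method; the formula inside is Aitken's $\Delta^2$):

```python
import math

def aitken(x0, g, steps):
    """Steffensen's method: apply Aitken's Delta^2 extrapolation
    x_hat = x - (g(x) - x)^2 / (g(g(x)) - 2 g(x) + x)
    and restart the fixed-point iteration from x_hat each step."""
    x = x0
    for _ in range(steps):
        x1, x2 = g(x), g(g(x))
        denom = x2 - 2 * x1 + x
        if denom == 0:  # already converged to machine precision
            return x
        x = x - (x1 - x) ** 2 / denom  # Aitken's Delta^2 formula
    return x

p = aitken(1.0, math.cos, 5)  # converges far faster than plain iteration
```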

The theorem extends to multidimensional systems: for $\mathbf{x}_{n+1} = \mathbf{g}(\mathbf{x}_n)$ with $\mathbf{g}: \mathbb{R}^n \to \mathbb{R}^n$, convergence is guaranteed if the Jacobian satisfies $\|\mathbf{J}_g\| \leq k < 1$ in some norm. This principle underpins Newton's method for systems of nonlinear equations.
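A small illustration in two dimensions, using a hypothetical map (not from the source) whose Jacobian entries are all bounded by $1/3$, so $\|\mathbf{J}_g\|_\infty \leq 1/3 < 1$ and the iteration contracts in the max norm:

```python
import math

# Hypothetical 2-D contraction: |d(g1)/dy| <= 1/3 and |d(g2)/dx| <= 1/3,
# so the infinity-norm of the Jacobian is at most 1/3 everywhere.
def g(v):
    x, y = v
    return (math.cos(y) / 3.0, math.sin(x) / 3.0 + 0.5)

v = (0.0, 0.0)
for _ in range(50):
    v = g(v)
# after 50 steps the error is at most (1/3)**50 times the initial error,
# so v approximates the unique fixed point to machine precision
```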

Practical considerations include choosing $g$ to minimize $k$. For $f(x) = 0$, rewriting as $x = x + \alpha f(x)$ gives $g'(x) = 1 + \alpha f'(x)$, so a well-chosen $\alpha$ can make $|g'|$ small near the root and accelerate fixed-point iteration; Newton's choice $\alpha = -1/f'(x)$ is typically best.
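The effect of $\alpha$ can be seen on the example $f(x) = x^2 - 2$ (my choice of test problem, not the source's): since $g'(x) = 1 + 2\alpha x$, an $\alpha$ near $-1/(2\sqrt{2}) \approx -0.354$ drives $|g'(p)|$ toward zero.

```python
import math

# Solve f(x) = x^2 - 2 = 0 via the relaxation x = x + alpha * f(x).
def solve(alpha, x0=1.0, n=60):
    x = x0
    for _ in range(n):
        x = x + alpha * (x * x - 2.0)
    return x

slow = solve(-0.5)   # |g'(p)| = |1 - sqrt(2)|   ~ 0.414: converges slowly
fast = solve(-1/3)   # |g'(p)| = |1 - 2*sqrt(2)/3| ~ 0.057: much faster
```

Both runs converge to $\sqrt{2}$, but the per-step error factor differs by almost an order of magnitude, matching the $k$-dependence of the linear convergence rate.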