Fixed-Point Theorem
The fixed-point theorem provides conditions under which iterative methods converge, forming the theoretical foundation for analyzing root-finding algorithms including Newton's method.
Let $g: [a,b] \to \mathbb{R}$ satisfy:
- $g$ is continuous on $[a,b]$ (continuity)
- $g(x) \in [a,b]$ for all $x \in [a,b]$ (maps interval to itself)
- $|g(x) - g(y)| \le L|x - y|$ for all $x, y \in [a,b]$, with $0 \le L < 1$ (Lipschitz with constant $L$)
Then:
- $g$ has a unique fixed point $x^* \in [a,b]$ (i.e., $g(x^*) = x^*$)
- For any $x_0 \in [a,b]$, the iteration $x_{n+1} = g(x_n)$ converges to $x^*$
- The error satisfies:
$$|x_n - x^*| \le L^n |x_0 - x^*| \quad \text{and} \quad |x_n - x^*| \le \frac{L^n}{1-L}\,|x_1 - x_0|$$
The constant $L$ is called the contraction factor. Convergence is linear with asymptotic error constant $L$ (more precisely, $|g'(x^*)|$ when $g$ is differentiable). Smaller $L$ means faster convergence; $L$ near 1 implies slow convergence.
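The theorem and its a priori error bound can be checked numerically. The sketch below uses a hypothetical toy contraction $g(x) = x/2 + 1$ on $[0, 4]$ (not from the original text), which has Lipschitz constant $L = 1/2$ and fixed point $x^* = 2$:

```python
def fixed_point_iterate(g, x0, n_steps):
    """Run x_{n+1} = g(x_n) and return the whole trajectory."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append(g(xs[-1]))
    return xs

# Toy contraction on [0, 4]: g(x) = x/2 + 1, Lipschitz constant L = 1/2,
# unique fixed point x* = 2 (since g(2) = 2).
L, x_star = 0.5, 2.0
xs = fixed_point_iterate(lambda x: 0.5 * x + 1.0, 0.0, 20)

# Verify the a priori bound |x_n - x*| <= L**n * |x_0 - x*| at every step.
for n, x in enumerate(xs):
    assert abs(x - x_star) <= L**n * abs(xs[0] - x_star) + 1e-12
```

For this linear $g$ the bound holds with equality, so the error is cut exactly in half each iteration, illustrating linear convergence with factor $L$.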
Relationship to Newton's Method: Newton's method is a fixed-point iteration with:
$$g(x) = x - \frac{f(x)}{f'(x)}$$
Computing $g'(x)$:
$$g'(x) = 1 - \frac{f'(x)^2 - f(x)f''(x)}{f'(x)^2} = \frac{f(x)f''(x)}{f'(x)^2}$$
At a simple root $x^*$ where $f(x^*) = 0$ and $f'(x^*) \neq 0$, we have $g'(x^*) = 0$, explaining Newton's quadratic (not just linear) convergence.
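The vanishing of $g'(x^*)$ shows up numerically as roughly squaring the error each step. A minimal sketch, using $f(x) = x^2 - 2$ (a hypothetical example, so $x^* = \sqrt{2}$) as the test function:

```python
import math

# Newton's method as the fixed-point iteration x_{n+1} = g(x_n) with
# g(x) = x - f(x)/f'(x); for f(x) = x^2 - 2 this gives g(x) = x - (x^2-2)/(2x).
def newton_g(x):
    return x - (x * x - 2.0) / (2.0 * x)

x, x_star = 1.0, math.sqrt(2.0)
errors = []
for _ in range(5):
    x = newton_g(x)
    errors.append(abs(x - x_star))

# The error is roughly squared at each step (quadratic convergence),
# consistent with g'(x*) = 0 at the simple root.
print(errors)
```

Each printed error is on the order of the square of the previous one, in contrast to the fixed-ratio decrease of a plain linear iteration.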
The equation $x = \cos x$ has a unique solution $x^* \approx 0.739$ (since $y = x$ crosses $y = \cos x$ exactly once). Using $g(x) = \cos x$ on $[0, 1]$:
- $|g'(x)| = |\sin x| \le \sin 1 \approx 0.8415$ on $[0, 1]$
The iteration $x_{n+1} = \cos(x_n)$ converges with rate $L \approx 0.84$:
$$|x_n - x^*| \le (0.84)^n\,|x_0 - x^*|$$
Convergence is relatively slow because $L \approx 0.84$ is near 1. Accelerated methods such as Aitken's $\Delta^2$ process can improve the convergence rate.
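The slow linear convergence for $x = \cos x$ is easy to observe directly; a minimal sketch (starting point 0.5 and iteration count are arbitrary choices):

```python
import math

# Fixed-point iteration for x = cos x on [0, 1]. Since |g'(x)| = |sin x|
# <= sin(1) ~ 0.84 on this interval, the contraction theorem applies,
# but with a factor near 1 the iteration needs many steps.
x = 0.5
for _ in range(100):
    x = math.cos(x)
print(x)  # ≈ 0.739085, the unique solution of x = cos x
```

Many dozens of iterations are needed for full double precision here, whereas Newton's method on $f(x) = x - \cos x$ would reach it in a handful of steps.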
The theorem extends to multidimensional systems: for $G: D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ with $G(D) \subseteq D$, if the Jacobian satisfies $\|J_G(x)\| \le L < 1$ on $D$, convergence is guaranteed. This principle underpins Newton's method for systems of nonlinear equations.
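The multidimensional statement can be sketched with a hypothetical 2-D map, $G(x, y) = \bigl(\tfrac{1}{2}\cos y,\ \tfrac{1}{2}\sin x\bigr)$, chosen (not from the original text) so that it maps $[-1,1]^2$ into itself and every Jacobian entry is bounded by $1/2$:

```python
import math

# 2-D contraction: G(x, y) = (cos(y)/2, sin(x)/2). Each partial derivative
# is bounded in absolute value by 1/2, so ||J_G|| <= 1/2 < 1 and the
# multidimensional fixed-point theorem guarantees convergence.
def G(v):
    x, y = v
    return (0.5 * math.cos(y), 0.5 * math.sin(x))

v = (0.0, 0.0)
for _ in range(50):
    v = G(v)

# At convergence v satisfies v = G(v) up to rounding error.
print(v)
```

The same picture, with $G$ replaced by the Newton update $x \mapsto x - J_f(x)^{-1} f(x)$, underlies Newton's method for nonlinear systems.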
Practical considerations include choosing $g$ to minimize $|g'(x^*)|$. For $f(x) = 0$, rewriting as $x = x - \lambda f(x)$ with optimal $\lambda \approx 1/f'(x^*)$ can accelerate fixed-point iteration, though Newton's choice $\lambda = 1/f'(x_n)$ is typically best.
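The effect of the relaxation parameter can be sketched with $f(x) = x^2 - 2$ and a fixed $\lambda$ chosen near $1/f'(\sqrt{2}) = 1/(2\sqrt{2}) \approx 0.354$ (both the function and the value $\lambda = 0.35$ are hypothetical choices for illustration):

```python
import math

# Relaxed fixed-point iteration x_{n+1} = x_n - lam * f(x_n) for
# f(x) = x^2 - 2. Here g'(x) = 1 - 2*lam*x, so a fixed lam near
# 1/f'(x*) = 1/(2*sqrt(2)) makes |g'(x*)| small and convergence fast.
f = lambda x: x * x - 2.0
lam = 0.35            # hypothetical fixed relaxation parameter
x = 1.0
for _ in range(40):
    x = x - lam * f(x)
print(x)  # ≈ sqrt(2)
```

With $\lambda = 0.35$ the local factor is $|1 - 2\lambda\sqrt{2}| \approx 0.01$, already close to Newton's behavior; Newton's choice $\lambda = 1/f'(x_n)$ adapts the parameter at every step and drives the factor to zero at the root.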