Bisection and Fixed-Point Iteration
Root-finding algorithms solve equations of the form $f(x) = 0$, where $f$ is continuous. The bisection method and fixed-point iteration provide fundamental approaches with different convergence guarantees.
Given a continuous function $f$ on $[a, b]$ with $f(a)f(b) < 0$, the bisection method iteratively halves the interval:
- Compute the midpoint $c = (a + b)/2$
- If $f(c) = 0$, stop
- If $f(a)f(c) < 0$, set $b = c$; otherwise set $a = c$
- Repeat until $b - a < \mathrm{tol}$
The intermediate value theorem guarantees a root $x^* \in [a, b]$. After $n$ iterations, the midpoint satisfies $|c_n - x^*| \le (b - a)/2^n$, giving linear convergence with rate $1/2$.
The bisection method is robust and always converges when and have opposite signs, but converges slowly. It requires only function evaluations, not derivatives, making it suitable for non-smooth functions.
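The steps above can be sketched in Python as follows; the function name `bisect`, the tolerance default, and the iteration cap are illustrative choices, not from the text:

```python
import math

def bisect(f, a, b, tol=1e-10, max_iter=200):
    """Find a root of f in [a, b] by repeated interval halving.

    Requires f(a) and f(b) to have opposite signs.
    """
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2.0
        fc = f(c)
        if fc == 0 or (b - a) / 2.0 < tol:
            return c
        if fa * fc < 0:      # root lies in [a, c]
            b, fb = c, fc
        else:                # root lies in [c, b]
            a, fa = c, fc
    return (a + b) / 2.0

root = bisect(math.cos, 0.0, 2.0)   # cos changes sign on [0, 2]; root near pi/2
```

Note that only sign information and function values are used, consistent with the method's derivative-free character.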
A fixed point of $g$ is a value $x^*$ satisfying $g(x^*) = x^*$. The fixed-point iteration is:

$$x_{n+1} = g(x_n), \quad n = 0, 1, 2, \ldots$$
To solve $f(x) = 0$, rewrite it as $x = g(x)$, where, for example, $g(x) = x - \phi(x)\,f(x)$ for some $\phi(x) \neq 0$; any root of $f$ is then a fixed point of $g$.
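A minimal sketch of the generic iteration, assuming a simple damped rearrangement $g(x) = x - f(x)/4$ of $f(x) = x^2 - 2$ (the helper name `fixed_point` and the damping factor are illustrative):

```python
def fixed_point(g, x0, tol=1e-12, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until successive values agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Rearrange x^2 - 2 = 0 as x = g(x) with g(x) = x - (x^2 - 2)/4.
# Near sqrt(2), |g'(x)| = |1 - x/2| is about 0.29 < 1, so the iteration contracts.
root = fixed_point(lambda x: x - (x * x - 2.0) / 4.0, x0=1.0)
```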
To compute $\sqrt{a}$, solve $x^2 - a = 0$. Rewriting as $x = \frac{1}{2}\left(x + \frac{a}{x}\right)$ gives the iteration:

$$x_{n+1} = \frac{1}{2}\left(x_n + \frac{a}{x_n}\right)$$
This is Newton's method for $f(x) = x^2 - a$. For $a = 2$ with $x_0 = 1$:

$$x_1 = 1.5, \quad x_2 \approx 1.4166667, \quad x_3 \approx 1.4142157, \quad x_4 \approx 1.4142136$$
The error $e_n = x_n - \sqrt{a}$ decreases quadratically: $e_{n+1} = \dfrac{e_n^2}{2x_n} \approx \dfrac{e_n^2}{2\sqrt{a}}$.
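The quadratic error decay can be checked numerically; this sketch (variable names are illustrative) tracks the error at each step for $a = 2$, $x_0 = 1$:

```python
import math

a = 2.0
x = 1.0                          # x_0 = 1
errors = []                      # |x_n - sqrt(a)| for n = 1..5
for _ in range(5):
    x = (x + a / x) / 2.0        # x_{n+1} = (x_n + a/x_n) / 2
    errors.append(abs(x - math.sqrt(a)))

# Successive errors shrink like e_{n+1} ~ e_n^2 / (2*sqrt(a)):
ratio = errors[2] / errors[1] ** 2   # close to 1/(2*sqrt(2)) ~ 0.354
```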
Convergence Theorem: If $g \in C[a, b]$ with $g([a, b]) \subseteq [a, b]$ and $|g'(x)| \le k < 1$ for all $x \in [a, b]$, then:
- $g$ has a unique fixed point $x^* \in [a, b]$
- The iteration $x_{n+1} = g(x_n)$ converges to $x^*$ for any $x_0 \in [a, b]$
The constant $k$ is the contraction factor. Convergence is linear with rate $k$: $|x_n - x^*| \le k^n\,|x_0 - x^*|$.
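As a concrete instance of the theorem (my choice of example, not from the text): $g = \cos$ maps $[0, 1]$ into $[\cos 1, 1] \subset [0, 1]$ and satisfies $|g'(x)| = |\sin x| \le \sin 1 \approx 0.84 < 1$ there, so the iteration must converge from any starting point in the interval:

```python
import math

g = math.cos        # maps [0, 1] into [cos(1), 1], a subset of [0, 1]
k = math.sin(1.0)   # contraction factor: |g'(x)| = |sin(x)| <= sin(1) < 1

x = 0.5             # any x_0 in [0, 1] works
for _ in range(100):
    x = g(x)        # converges linearly (rate at most k) to the unique fixed point
```

After the loop, $x$ agrees with the unique fixed point $x^* \approx 0.739085$ of $\cos$ to machine precision.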
The choice of $g$ dramatically affects convergence. For $f(x) = x^2 - 2$, setting $g(x) = 2/x$ yields a non-convergent oscillation (since $g(g(x)) = x$, the iterates simply alternate between $x_0$ and $2/x_0$), while $g(x) = \frac{1}{2}(x + 2/x)$ converges quadratically. This motivates Newton's method, which constructs an effective $g(x) = x - f(x)/f'(x)$ using derivative information, so that $g'(x^*) = 0$ at the root.
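The contrast between the two rearrangements can be seen directly; a short sketch (the helper name `iterate` is illustrative):

```python
def iterate(g, x0, n):
    """Return the iterates x_0, g(x_0), g(g(x_0)), ... for n steps."""
    xs = [x0]
    for _ in range(n):
        xs.append(g(xs[-1]))
    return xs

# Two rearrangements of x^2 - 2 = 0 as x = g(x), both started from x_0 = 1.5:
bad  = iterate(lambda x: 2.0 / x, 1.5, 4)              # alternates ~1.5, ~1.333, ...
good = iterate(lambda x: (x + 2.0 / x) / 2.0, 1.5, 4)  # Newton form: quadratic convergence
```

The first sequence cycles with period two and never settles; the second reaches $\sqrt{2}$ to full double precision within a handful of steps.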
Practical implementations incorporate stopping criteria based on both the absolute change $|x_{n+1} - x_n| < \varepsilon$ and the relative change $|x_{n+1} - x_n| / |x_{n+1}| < \varepsilon$, along with maximum iteration limits. The bisection method provides guaranteed but slow convergence, while fixed-point iterations can converge rapidly if $g$ is well-chosen but may diverge otherwise.
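A combined stopping test along these lines might look as follows (the name `converged` and the default tolerances are illustrative assumptions):

```python
def converged(x_new, x_old, abs_tol=1e-12, rel_tol=1e-10):
    """Stop when the last step is small in absolute or relative terms."""
    step = abs(x_new - x_old)
    return step < abs_tol or step < rel_tol * abs(x_new)
```

The relative test keeps the criterion meaningful when the root is large in magnitude; the absolute test protects against division-by-near-zero when the root is close to $0$.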