Positive Definite Matrices
Positive definite matrices are symmetric matrices with positive eigenvalues. They arise naturally in optimization, statistics, and geometry as measures of "positive curvature."
A symmetric matrix $A \in \mathbb{R}^{n \times n}$ is:
- Positive definite (PD) if $x^\top A x > 0$ for all $x \neq 0$
- Positive semi-definite (PSD) if $x^\top A x \geq 0$ for all $x$
- Negative definite if $x^\top A x < 0$ for all $x \neq 0$
- Indefinite if $x^\top A x$ takes both positive and negative values
The quadratic form $q(x) = x^\top A x$ generalizes the notion of squared length: with $A = I$ it reduces to $\|x\|^2$.
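As a minimal numerical sketch (assuming NumPy), evaluating the quadratic form with $A = I$ recovers squared length:

```python
import numpy as np

def quadratic_form(A, x):
    """Evaluate q(x) = x^T A x."""
    return x @ A @ x

I = np.eye(2)
x = np.array([3.0, 4.0])
q = quadratic_form(I, x)   # equals ||x||^2 = 9 + 16 = 25
```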
For a symmetric matrix $A$, the following are equivalent:
- $A$ is positive definite
- All eigenvalues of $A$ are positive
- All leading principal minors of $A$ are positive (Sylvester's criterion)
- There exists an invertible lower-triangular $L$ such that $A = L L^\top$ (Cholesky decomposition)
- $A = L D L^\top$ where all diagonal entries of $D$ are positive
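Two of these equivalent conditions translate directly into code. A sketch (assuming NumPy) that checks positive definiteness via eigenvalues, and via the fact that the Cholesky factorization succeeds exactly when the matrix is PD:

```python
import numpy as np

def is_positive_definite(A, tol=1e-10):
    """Check PD for a symmetric matrix via its eigenvalues."""
    A = np.asarray(A, dtype=float)
    if not np.allclose(A, A.T):
        return False
    # eigvalsh is for symmetric matrices; all eigenvalues must be positive
    return bool(np.linalg.eigvalsh(A).min() > tol)

def is_pd_cholesky(A):
    """Equivalent check: Cholesky succeeds iff A is PD."""
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False
```

The Cholesky-based check is the standard practical test: it costs one factorization attempt rather than a full eigendecomposition.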
Consider $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$.
Method 1 (Eigenvalues): The characteristic polynomial $\det(A - \lambda I) = (2 - \lambda)^2 - 1 = 0$ gives $\lambda_1 = 1$, $\lambda_2 = 3$. Both positive, so $A$ is PD.
Method 2 (Sylvester): Leading principal minors are $2 > 0$ and $\det A = 3 > 0$, confirming PD.
Method 3 (Quadratic form): $x^\top A x = 2x_1^2 + 2x_1 x_2 + 2x_2^2 = x_1^2 + x_2^2 + (x_1 + x_2)^2 > 0$ for $x \neq 0$.
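The eigenvalue and minor computations can be reproduced numerically. A sketch assuming NumPy and a representative $2 \times 2$ matrix $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])

eigenvalues = np.linalg.eigvalsh(A)    # ascending order: [1.0, 3.0]
minors = [A[0, 0], np.linalg.det(A)]   # leading principal minors: 2, 3
```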
If $A$ is positive definite with spectral decomposition $A = Q \Lambda Q^\top$, then the matrix square root is:
$$A^{1/2} = Q \Lambda^{1/2} Q^\top,$$
where $\Lambda^{1/2}$ has diagonal entries $\sqrt{\lambda_i}$. This satisfies $A^{1/2} A^{1/2} = A$.
Similarly, $A^{-1/2} = Q \Lambda^{-1/2} Q^\top$ is well-defined.
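The spectral construction above translates into a few lines; a sketch assuming NumPy (the function name `sqrtm_pd` is illustrative):

```python
import numpy as np

def sqrtm_pd(A):
    """Matrix square root of a symmetric PD matrix via A = Q diag(lam) Q^T."""
    lam, Q = np.linalg.eigh(A)
    return Q @ np.diag(np.sqrt(lam)) @ Q.T

A = np.array([[2.0, 1.0], [1.0, 2.0]])
S = sqrtm_pd(A)
# S is itself symmetric PD, and S @ S recovers A
```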
- Covariance matrices in statistics are always PSD; if no variable is a linear combination of the others, the matrix is PD
- Hessian matrices in optimization: a PD Hessian at a critical point implies a strict local minimum
- Inner products: $\langle x, y \rangle_A = x^\top A y$ defines an inner product iff $A$ is PD
- Ellipsoids: the set $\{x : x^\top A x \leq 1\}$ is an ellipsoid when $A$ is PD
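The covariance claim is easy to check empirically. A sketch (assuming NumPy) that builds a sample covariance matrix from random data and verifies its smallest eigenvalue is positive:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(200, 3))        # 200 observations of 3 variables
C = np.cov(X, rowvar=False)          # 3x3 sample covariance matrix

# C is always PSD; for generic continuous data it is PD
min_eig = np.linalg.eigvalsh(C).min()
```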
Positive definiteness is a geometric property: it means the associated quadratic form is a "bowl" opening upward with a unique minimum at the origin. In machine learning, kernel matrices must be PSD to define valid similarity measures. In numerical analysis, symmetric PD systems can be solved efficiently via Cholesky decomposition, which requires roughly half the arithmetic of LU decomposition and is numerically stable without pivoting.
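To illustrate the Cholesky-based solve, here is a minimal sketch (assuming NumPy) that factors $A = L L^\top$ and then solves $Ax = b$ by forward and back substitution; the explicit loops are for exposition, and in practice a library triangular solver would be used:

```python
import numpy as np

def cholesky_solve(A, b):
    """Solve A x = b for symmetric PD A via A = L L^T."""
    L = np.linalg.cholesky(A)
    n = len(b)
    # Forward substitution: solve L y = b
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    # Back substitution: solve L^T x = y
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (y[i] - L[i+1:, i] @ x[i+1:]) / L[i, i]
    return x

A = np.array([[4.0, 2.0], [2.0, 3.0]])   # symmetric PD
b = np.array([2.0, 5.0])
x = cholesky_solve(A, b)
```

Once $L$ is computed, each additional right-hand side costs only two triangular solves, which is why the factorization is reused in repeated-solve settings.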