
Proof: Linear Independence of Eigenvectors

We prove that eigenvectors corresponding to distinct eigenvalues are linearly independent. This result is fundamental to understanding diagonalization.


Theorem: Let $A$ be an $n \times n$ matrix. If $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_k$ are eigenvectors corresponding to distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_k$, then $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_k\}$ is linearly independent.

Proof: We proceed by induction on $k$.

Base case ($k = 1$): A single eigenvector $\mathbf{v}_1$ is nonzero by definition, hence linearly independent.

Inductive step: Assume the result holds for any $k-1$ eigenvectors corresponding to distinct eigenvalues. We prove it for $k$ eigenvectors.

Suppose for contradiction that $\{\mathbf{v}_1, \ldots, \mathbf{v}_k\}$ is linearly dependent. Then there exist scalars $c_1, \ldots, c_k$, not all zero, such that: $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k = \mathbf{0} \quad (1)$

Since not all $c_i$ are zero, we may assume without loss of generality that $c_k \neq 0$ (reordering if necessary).

Step 1: Apply the matrix $A$ to equation (1): $A(c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k) = A\mathbf{0} = \mathbf{0}$

By linearity: $c_1A\mathbf{v}_1 + c_2A\mathbf{v}_2 + \cdots + c_kA\mathbf{v}_k = \mathbf{0}$

Since $A\mathbf{v}_i = \lambda_i\mathbf{v}_i$ for each $i$: $c_1\lambda_1\mathbf{v}_1 + c_2\lambda_2\mathbf{v}_2 + \cdots + c_k\lambda_k\mathbf{v}_k = \mathbf{0} \quad (2)$

Step 2: Multiply equation (1) by $\lambda_k$: $c_1\lambda_k\mathbf{v}_1 + c_2\lambda_k\mathbf{v}_2 + \cdots + c_k\lambda_k\mathbf{v}_k = \mathbf{0} \quad (3)$

Step 3: Subtract equation (3) from equation (2): $c_1(\lambda_1 - \lambda_k)\mathbf{v}_1 + c_2(\lambda_2 - \lambda_k)\mathbf{v}_2 + \cdots + c_{k-1}(\lambda_{k-1} - \lambda_k)\mathbf{v}_{k-1} = \mathbf{0}$

Note that the term involving $\mathbf{v}_k$ cancels: $c_k(\lambda_k - \lambda_k)\mathbf{v}_k = \mathbf{0}$.
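The cancellation in Steps 1–3 can be observed numerically. The sketch below (assuming only NumPy, and using an arbitrary diagonal matrix with arbitrary coefficients rather than an actual dependence relation) applies $A$ to a combination of eigenvectors, subtracts $\lambda_k$ times the same combination, and confirms the $\mathbf{v}_k$ component vanishes while each remaining component picks up the factor $\lambda_i - \lambda_k$:

```python
import numpy as np

# Example 3x3 matrix with distinct eigenvalues (diagonal for simplicity).
A = np.diag([2.0, 5.0, -1.0])
lam = np.array([2.0, 5.0, -1.0])
V = np.eye(3)                    # columns are the eigenvectors v_1, v_2, v_3

c = np.array([1.0, -3.0, 4.0])   # arbitrary coefficients c_1, c_2, c_3
r = V @ c                        # the combination c_1 v_1 + c_2 v_2 + c_3 v_3

# Step 1: apply A to the combination; Steps 2-3: subtract lambda_k times it.
diff = A @ r - lam[-1] * r

# The v_3 component is c_3 * (lam_3 - lam_3) = 0; component i < 3
# is c_i * (lam_i - lam_3), exactly as in the proof's Step 3.
print(diff)
```

Because the eigenvectors here are the standard basis vectors, each entry of `diff` is directly the coefficient of the corresponding $\mathbf{v}_i$ after the subtraction.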

Step 4: Apply the inductive hypothesis. By assumption, $\{\mathbf{v}_1, \ldots, \mathbf{v}_{k-1}\}$ is linearly independent (since these correspond to distinct eigenvalues $\lambda_1, \ldots, \lambda_{k-1}$).

Therefore, all coefficients in the above equation must be zero: $c_i(\lambda_i - \lambda_k) = 0 \quad \text{for } i = 1, 2, \ldots, k-1$

Step 5: Reach a contradiction. Since the eigenvalues are distinct, $\lambda_i \neq \lambda_k$ for $i < k$, so $\lambda_i - \lambda_k \neq 0$ for all $i = 1, \ldots, k-1$.

Therefore: $c_1 = c_2 = \cdots = c_{k-1} = 0$

Substituting back into equation (1): $c_k\mathbf{v}_k = \mathbf{0}$

Since $\mathbf{v}_k$ is an eigenvector, $\mathbf{v}_k \neq \mathbf{0}$. Therefore $c_k = 0$.

But this contradicts our assumption that not all $c_i$ are zero.

Conclusion: The assumption that $\{\mathbf{v}_1, \ldots, \mathbf{v}_k\}$ is linearly dependent leads to a contradiction. Therefore, the eigenvectors are linearly independent. ∎

Remark

This proof employs a powerful technique: applying the linear operator to a linear dependence relation and subtracting a scaled version of the original relation to eliminate terms. This "operator manipulation" strategy appears throughout linear algebra.

The result immediately implies: if an $n \times n$ matrix has $n$ distinct eigenvalues, it has $n$ linearly independent eigenvectors and is therefore diagonalizable. This provides a simple sufficient condition for diagonalizability.
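This sufficient condition is easy to check numerically. A minimal sketch, assuming only NumPy and an arbitrarily chosen $2 \times 2$ matrix: since its eigenvalues turn out distinct, the eigenvector matrix must have full rank, and the matrix diagonalizes as $A = PDP^{-1}$:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])     # eigenvalues 5 and 2, which are distinct

eigvals, P = np.linalg.eig(A)  # columns of P are eigenvectors
assert len(set(np.round(eigvals, 8))) == len(eigvals)  # distinct eigenvalues

# Distinct eigenvalues => eigenvectors linearly independent => P has full rank.
print(np.linalg.matrix_rank(P))  # 2

# Hence P is invertible and A = P D P^{-1}.
D = np.diag(eigvals)
print(np.allclose(P @ D @ np.linalg.inv(P), A))  # True
```

Note the condition is sufficient but not necessary: the identity matrix is diagonalizable despite having a repeated eigenvalue.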