
Proof: Multiplicativity of Determinants

We prove the fundamental multiplicativity property of determinants, which states that $\det(AB) = \det(A)\det(B)$ for any square matrices $A$ and $B$.


Theorem: For $n \times n$ matrices $A$ and $B$, $\det(AB) = \det(A)\det(B)$.

Proof: We prove this using properties of elementary matrices and the relationship between row operations and determinants.

Step 1: Prove the result when $A$ is an elementary matrix.

There are three types of elementary matrices:

Type I (row swap): If $E$ swaps two rows, then $\det(E) = -1$, and multiplying by $E$ on the left swaps two rows of $B$, so: $\det(EB) = -\det(B) = \det(E)\det(B)$

Type II (row scaling): If $E$ multiplies a row by a nonzero scalar $c$, then $\det(E) = c$, and multiplying by $E$ scales a row of $B$ by $c$, so: $\det(EB) = c\det(B) = \det(E)\det(B)$

Type III (row addition): If $E$ adds a multiple of one row to another, then $\det(E) = 1$, and this operation doesn't change the determinant of $B$: $\det(EB) = \det(B) = 1 \cdot \det(B) = \det(E)\det(B)$

Thus the theorem holds when $A$ is any elementary matrix.
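The three cases can be spot-checked numerically. A small sketch using NumPy; the particular matrix $B$ and the scalars are illustrative choices, not part of the proof:

```python
import numpy as np

n = 3
B = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])

# Type I: swap rows 0 and 1 of the identity (det = -1).
E1 = np.eye(n)
E1[[0, 1]] = E1[[1, 0]]

# Type II: scale row 2 by c = 5 (det = 5).
E2 = np.eye(n)
E2[2, 2] = 5.0

# Type III: add 4 * (row 0) to row 1 (det = 1).
E3 = np.eye(n)
E3[1, 0] = 4.0

# In each case det(EB) = det(E) * det(B), up to floating-point error.
for E in (E1, E2, E3):
    assert np.isclose(np.linalg.det(E @ B),
                      np.linalg.det(E) * np.linalg.det(B))
```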

Step 2: Extend to invertible matrices using factorization.

If $A$ is invertible, it can be written as a product of elementary matrices: $A = E_1 E_2 \cdots E_k$

Then by repeated application of Step 1:

$$\det(AB) = \det(E_1 E_2 \cdots E_k B) = \det(E_1)\det(E_2 \cdots E_k B) = \det(E_1)\det(E_2)\det(E_3 \cdots E_k B) = \cdots = \det(E_1)\det(E_2) \cdots \det(E_k)\det(B)$$

Applying the same peeling-off argument to the product $E_1 E_2 \cdots E_k$ itself gives $\det(E_1)\det(E_2) \cdots \det(E_k) = \det(E_1 E_2 \cdots E_k) = \det(A)$, so $\det(AB) = \det(A)\det(B)$.
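For invertible matrices the factorization argument predicts exact multiplicativity; a quick numerical spot-check (random Gaussian matrices are almost surely invertible, and the sizes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))
    # det(AB) = det(A) det(B), up to floating-point roundoff.
    assert np.isclose(np.linalg.det(A @ B),
                      np.linalg.det(A) * np.linalg.det(B))
```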

Step 3: Extend to all matrices.

If $A$ is not invertible, then $\det(A) = 0$. We claim that $AB$ is also not invertible (so $\det(AB) = 0$).

A direct contradiction argument via $(AB)^{-1}$ would require $A^{-1}$ and $B^{-1}$, which need not exist. Instead, we argue via kernels:

Since $A$ is not invertible, $\ker(A) \neq \{\mathbf{0}\}$: there exists $\mathbf{w} \neq \mathbf{0}$ with $A\mathbf{w} = \mathbf{0}$.

If $B$ is invertible, set $\mathbf{v} = B^{-1}\mathbf{w} \neq \mathbf{0}$; then $(AB)\mathbf{v} = A(BB^{-1}\mathbf{w}) = A\mathbf{w} = \mathbf{0}$. If $B$ is not invertible, any nonzero $\mathbf{v} \in \ker(B)$ gives $(AB)\mathbf{v} = A(B\mathbf{v}) = A\mathbf{0} = \mathbf{0}$. In either case $AB$ has nontrivial kernel and cannot be invertible.

Therefore $\det(AB) = 0 = \det(A)\det(B)$.

Alternatively, via rank: $\operatorname{rank}(AB) \leq \operatorname{rank}(A) < n$, so the columns of $AB$ are linearly dependent, giving $\det(AB) = 0$.
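The singular case can be checked the same way: for a rank-deficient $A$, both sides vanish regardless of $B$. The matrices below are illustrative (the repeated row forces $\det(A) = 0$):

```python
import numpy as np

# A has two identical rows, so rank(A) = 2 < 3 and det(A) = 0.
A = np.array([[1.0, 2.0, 3.0],
              [1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])
B = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 4.0]])

# rank(AB) <= rank(A) < n, so det(AB) = 0 = det(A) det(B).
assert np.linalg.matrix_rank(A) < 3
assert np.linalg.matrix_rank(A @ B) <= np.linalg.matrix_rank(A)
assert np.isclose(np.linalg.det(A @ B), 0.0)
assert np.isclose(np.linalg.det(A) * np.linalg.det(B), 0.0)
```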

Alternative approach (which covers all cases at once): view the determinant as a function. For fixed $B$, define $f(A) = \det(AB)$. This function:

  • Is multilinear in the rows of AA
  • Is alternating (swapping rows changes sign)
  • Satisfies $f(I) = \det(B)$

By uniqueness of the determinant function with these properties, $f(A) = \det(A) \cdot \det(B)$.
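The three defining properties of $f$ can likewise be probed numerically. A sketch checking $f(I) = \det(B)$, the alternating property, and linearity in a single row (random matrices and the scale factor are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))

def f(A):
    # f(A) = det(AB) for the fixed B above.
    return np.linalg.det(A @ B)

# Normalization: f(I) = det(B).
assert np.isclose(f(np.eye(3)), np.linalg.det(B))

A = rng.standard_normal((3, 3))

# Alternating: swapping two rows of A flips the sign of f(A).
A_swapped = A[[1, 0, 2]]
assert np.isclose(f(A_swapped), -f(A))

# Multilinear in the rows: scaling one row of A scales f(A) by the same factor.
A_scaled = A.copy()
A_scaled[0] *= 2.5
assert np.isclose(f(A_scaled), 2.5 * f(A))
```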

Conclusion: In all cases, $\det(AB) = \det(A)\det(B)$. ∎

Remark

This proof demonstrates a powerful technique: prove a result for elementary matrices (where calculation is direct), extend to invertible matrices (via elementary matrix factorization), then handle the degenerate case. This strategy appears throughout linear algebra, leveraging the fact that elementary matrices form "building blocks" for all matrices.

The multiplicativity property is the foundation for many applications: computing determinants of matrix products, understanding similarity transformations, and connecting determinants to eigenvalues through characteristic polynomials.