
Inverse Matrices

A square matrix is invertible if there exists another matrix that "undoes" its effect. Invertibility is the matrix analogue of bijectivity for linear maps.


Definition

Definition 3.5 (Invertible matrix)

A square matrix $A \in M_{n \times n}(F)$ is invertible (or nonsingular) if there exists a matrix $B \in M_{n \times n}(F)$ such that

$$AB = BA = I_n.$$

The matrix $B$ is called the inverse of $A$ and is denoted $A^{-1}$. If no such $B$ exists, $A$ is singular (non-invertible).

Theorem 3.1 (Uniqueness of inverse)

If $A$ is invertible, then $A^{-1}$ is unique.

Proof

Suppose $B$ and $C$ are both inverses of $A$, so $AB = BA = I$ and $AC = CA = I$. Then $B = IB = (CA)B = C(AB) = CI = C$.

β– 
Remark (One-sided inverses suffice)

For square matrices, if $AB = I$, then automatically $BA = I$ as well. (Sketch: $AB = I$ forces $B$ to be injective, hence bijective by the Rank–Nullity Theorem, hence invertible; then $A = A(BB^{-1}) = (AB)B^{-1} = B^{-1}$, so $BA = BB^{-1} = I$.) This fails for non-square matrices and for infinite-dimensional operators.
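The non-square failure is easy to see numerically. A minimal sketch (the specific matrices are illustrative): a $2 \times 3$ matrix with a right inverse that is not a left inverse.

```python
import numpy as np

# For non-square matrices, a one-sided inverse need not be two-sided.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])   # 2x3
B = A.T                           # 3x2: a right inverse of A

print(np.allclose(A @ B, np.eye(2)))   # True:  AB = I_2
print(np.allclose(B @ A, np.eye(3)))   # False: BA projects onto a plane, not I_3
```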


Examples

Example: Inverse of a 2 × 2 matrix

For $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ with $ad - bc \neq 0$:

$$A^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$

For example, $\begin{pmatrix} 2 & 1 \\ 5 & 3 \end{pmatrix}^{-1} = \begin{pmatrix} 3 & -1 \\ -5 & 2 \end{pmatrix}$ since $\det = 6 - 5 = 1$.
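The closed-form $2 \times 2$ formula translates directly into code. A small sketch (the helper name `inv2x2` is ours, not standard):

```python
# Invert a 2x2 matrix via the adjugate formula: swap a and d, negate b and c,
# divide by the determinant ad - bc.
def inv2x2(a, b, c, d):
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular (ad - bc = 0)")
    return (d / det, -b / det, -c / det, a / det)

# The worked example above: det = 2*3 - 1*5 = 1.
print(inv2x2(2, 1, 5, 3))  # -> (3.0, -1.0, -5.0, 2.0)
```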

Example: Inverse of a diagonal matrix

$\text{diag}(d_1, \ldots, d_n)^{-1} = \text{diag}(1/d_1, \ldots, 1/d_n)$, provided all $d_i \neq 0$. If any $d_i = 0$, the matrix is singular.
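A quick numerical check of the entrywise-reciprocal rule (the diagonal entries are arbitrary nonzero choices):

```python
import numpy as np

# Diagonal inverse: take the reciprocal of each (nonzero) diagonal entry.
D = np.diag([2.0, 4.0, 5.0])
D_inv = np.diag(1.0 / np.diag(D))

print(np.allclose(D @ D_inv, np.eye(3)))  # True
print(np.allclose(D_inv @ D, np.eye(3)))  # True
```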

Example: Identity is its own inverse

$I_n^{-1} = I_n$.

Example: A singular matrix

$A = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$ is singular: $\det A = 4 - 4 = 0$. Equivalently, the columns $(1, 2)^T$ and $(2, 4)^T$ are linearly dependent, so $A$ has a nontrivial kernel: $A(-2, 1)^T = (0, 0)^T$.
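Both singularity tests from this example can be verified numerically (a sketch; floating-point determinants are compared against a small tolerance rather than exact zero):

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 4.0]])

print(abs(np.linalg.det(A)) < 1e-12)              # True: determinant vanishes
print(np.allclose(A @ np.array([-2.0, 1.0]), 0))  # True: (-2, 1)^T is in the kernel
```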

Example: Inverse of a rotation matrix

$R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$ has inverse $R_{-\theta} = R_\theta^T = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}$.

Rotation matrices are orthogonal: $R^T R = I$, so $R^{-1} = R^T$.
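The orthogonality identity is easy to confirm for a concrete angle (0.7 radians here is an arbitrary choice):

```python
import numpy as np

theta = 0.7  # arbitrary angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(R.T @ R, np.eye(2)))     # True: R is orthogonal
print(np.allclose(np.linalg.inv(R), R.T))  # True: the inverse is the transpose
```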

Example: Inverse of an upper triangular matrix

An upper triangular matrix is invertible iff all diagonal entries are nonzero. Its inverse is also upper triangular:

$$\begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix}^{-1} = \begin{pmatrix} 1 & -2/3 \\ 0 & 1/3 \end{pmatrix}.$$
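A numerical check of this example, including the claim that the inverse stays upper triangular:

```python
import numpy as np

U = np.array([[1.0, 2.0], [0.0, 3.0]])
U_inv = np.linalg.inv(U)

print(np.allclose(U_inv, [[1.0, -2/3], [0.0, 1/3]]))  # True: matches the example
print(abs(U_inv[1, 0]) < 1e-12)                        # True: inverse is upper triangular
```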

Example: Elementary matrices are invertible

Elementary matrices (corresponding to elementary row operations) are invertible:

  • Row swap: $(E_{ij})^{-1} = E_{ij}$ (swapping twice returns to the original).
  • Row scaling by $c \neq 0$: inverse scales by $1/c$.
  • Adding $c$ times row $j$ to row $i$: inverse subtracts $c$ times row $j$ from row $i$.

Example: Inverse of a product

$(AB)^{-1} = B^{-1}A^{-1}$ (the order reverses, as with socks and shoes: you put on socks then shoes, but take off shoes then socks).

$(ABC)^{-1} = C^{-1}B^{-1}A^{-1}$.
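The order reversal can be confirmed on random matrices (random Gaussian matrices are invertible with probability 1; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((2, 3, 3))  # two random 3x3 matrices

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)  # note the reversed order
print(np.allclose(lhs, rhs))  # True
```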

Example: Inverse of a transpose

$(A^T)^{-1} = (A^{-1})^T$. Proof: $A^T (A^{-1})^T = (A^{-1}A)^T = I^T = I$.
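A quick numerical sanity check of the transpose identity (again on a random matrix, invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))

# Inverse of the transpose equals transpose of the inverse.
print(np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T))  # True
```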

Example: Inverse of a power

$(A^k)^{-1} = (A^{-1})^k$, often written $A^{-k}$.
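NumPy's `matrix_power` accepts negative exponents, which implements exactly the $A^{-k}$ convention (the matrix and exponent here are arbitrary choices):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])  # invertible: det = 1
k = 3

lhs = np.linalg.inv(np.linalg.matrix_power(A, k))      # (A^k)^{-1}
rhs = np.linalg.matrix_power(np.linalg.inv(A), k)      # (A^{-1})^k
print(np.allclose(lhs, rhs))                           # True
print(np.allclose(lhs, np.linalg.matrix_power(A, -k))) # True: A^{-k}
```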

Example: Computing the inverse by Gauss–Jordan

To find $A^{-1}$, augment $[A \mid I]$ and row-reduce to $[I \mid A^{-1}]$.

Starting from $A = \begin{pmatrix} 1 & 2 \\ 3 & 7 \end{pmatrix}$: apply $R_2 \to R_2 - 3R_1$ to get pivots, then $R_1 \to R_1 - 2R_2$ to reach RREF. The right half of the augmented matrix becomes $A^{-1} = \begin{pmatrix} 7 & -2 \\ -3 & 1 \end{pmatrix}$.

Verify: $\begin{pmatrix} 1 & 2 \\ 3 & 7 \end{pmatrix}\begin{pmatrix} 7 & -2 \\ -3 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$.
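The augment-and-reduce procedure can be sketched in code. This is a minimal exact-arithmetic implementation using rationals, not an optimized routine (the function name `gauss_jordan_inverse` is ours):

```python
from fractions import Fraction

def gauss_jordan_inverse(A):
    """Invert a square matrix by row-reducing [A | I] to [I | A^{-1}]."""
    n = len(A)
    # Build the augmented matrix [A | I] with exact rational entries.
    M = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        # Find a row with a nonzero pivot in this column.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        # Normalize the pivot row, then clear the column above and below.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # The right half is now A^{-1}.
    return [row[n:] for row in M]

# The worked example: inverse of [[1, 2], [3, 7]] is [[7, -2], [-3, 1]].
print(gauss_jordan_inverse([[1, 2], [3, 7]]))
```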

Example: Solving Ax = b via the inverse

If $A$ is invertible, the unique solution to $Ax = b$ is $x = A^{-1}b$.

$$\begin{pmatrix} 2 & 1 \\ 5 & 3 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \end{pmatrix} \implies \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 3 & -1 \\ -5 & 2 \end{pmatrix}\begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$
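The same system in code. Note that in floating-point practice one prefers a solver over forming $A^{-1}$ explicitly, since it is cheaper and more numerically stable:

```python
import numpy as np

A = np.array([[2.0, 1.0], [5.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.linalg.inv(A) @ b
print(np.allclose(x, [1.0, -1.0]))            # True: matches the worked example

# Preferred in practice: solve the system directly.
print(np.allclose(np.linalg.solve(A, b), x))  # True
```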


The general linear group

Definition 3.6 (General linear group)

The set of all invertible $n \times n$ matrices over $F$ forms a group under matrix multiplication:

$$\text{GL}_n(F) = \{A \in M_{n \times n}(F) \mid A \text{ is invertible}\}.$$

This is the general linear group. It is a non-abelian group for $n \geq 2$.

Example: Elements of $\text{GL}_2(\mathbb{R})$

Writing $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, we have $\text{GL}_2(\mathbb{R}) = \{A \in M_{2 \times 2}(\mathbb{R}) \mid ad - bc \neq 0\}$. This is an open, dense subset of $M_{2 \times 2}(\mathbb{R}) \cong \mathbb{R}^4$, with the singular matrices forming a hypersurface of measure zero.

Remark (Looking ahead)

The many equivalent conditions for invertibility are collected in the Invertible Matrix Theorem. The determinant provides a scalar test for invertibility: $A$ is invertible iff $\det A \neq 0$ (see Chapter 4).