Matrix Operations
Matrices are the computational backbone of linear algebra. Every linear map between finite-dimensional spaces can be represented as a matrix, and matrix operations correspond to operations on linear maps.
Definitions
An $m \times n$ matrix $A$ over a field $F$ is a rectangular array with $m$ rows and $n$ columns, where $a_{ij} \in F$. The entry in row $i$ and column $j$ is denoted $a_{ij}$, $A_{ij}$, or $(A)_{ij}$.
For $A, B \in F^{m \times n}$ and $c \in F$:
$$(A + B)_{ij} = a_{ij} + b_{ij}, \qquad (cA)_{ij} = c\, a_{ij}.$$
With these operations, $F^{m \times n}$ is a vector space of dimension $mn$.
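The entrywise operations can be checked numerically. A minimal sketch using NumPy (NumPy and the specific matrices are assumptions for illustration, not from the text):

```python
import numpy as np

# Two 2x3 matrices over the reals and a scalar
A = np.array([[1., 2., 3.],
              [4., 5., 6.]])
B = np.array([[0., 1., 0.],
              [1., 0., 1.]])
c = 2.0

S = A + B    # entrywise sum: (A + B)_ij = a_ij + b_ij
cA = c * A   # entrywise scaling: (cA)_ij = c * a_ij

# The space of 2x3 matrices has dimension 2*3 = 6: the matrix units
# E_ij (1 in position (i, j), 0 elsewhere) form a basis.
```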
For $A \in F^{m \times n}$ and $B \in F^{n \times p}$, the product $AB \in F^{m \times p}$ is defined by
$$(AB)_{ik} = \sum_{j=1}^{n} a_{ij} b_{jk}.$$
The $(i, k)$ entry of $AB$ is the dot product of the $i$-th row of $A$ with the $k$-th column of $B$.
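The defining sum translates directly into a (slow but transparent) triple loop, which can be compared against a library product. A sketch assuming NumPy; `matmul_naive` and the sample matrices are illustrative names, not from the text:

```python
import numpy as np

def matmul_naive(A, B):
    """Multiply via the defining sum (AB)_ik = sum_j a_ij * b_jk."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "inner dimensions must match"
    C = np.zeros((m, p))
    for i in range(m):
        for k in range(p):
            # dot product of row i of A with column k of B
            C[i, k] = sum(A[i, j] * B[j, k] for j in range(n))
    return C

A = np.array([[1., 2.], [3., 4.], [5., 6.]])  # 3x2
B = np.array([[7., 8., 9.], [0., 1., 2.]])    # 2x3
assert np.allclose(matmul_naive(A, B), A @ B)  # agrees with NumPy's product
```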
Examples
Let $A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$ and $B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$. In fact $BA = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \neq 0$ but $AB = 0$: the product of two nonzero matrices can be zero.
$Ax$ is a linear combination of the columns of $A$ with coefficients from $x$: if $a_1, \dots, a_n$ are the columns of $A$, then $Ax = x_1 a_1 + \cdots + x_n a_n$. This is the column picture of matrix-vector multiplication.
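The column picture can be verified directly: a sketch assuming NumPy, with an illustrative matrix and vector:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])
x = np.array([10., -1.])

# Column picture: Ax = x_1 * (column 1) + x_2 * (column 2)
combo = x[0] * A[:, 0] + x[1] * A[:, 1]
assert np.allclose(A @ x, combo)
```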
The identity matrix $I_n$ has $1$'s on the diagonal and $0$'s elsewhere: $(I_n)_{ij} = \delta_{ij}$ (the Kronecker delta). For any $A \in F^{m \times n}$:
$$I_m A = A I_n = A.$$
If $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ and $B = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$, then
$$AB = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} \neq \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} = BA,$$
so matrix multiplication is not commutative in general.
Diagonal matrices always commute with each other.
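Both phenomena, noncommutativity in general and commutativity of diagonal matrices, are easy to check numerically. A sketch assuming NumPy, with matrices chosen for illustration:

```python
import numpy as np

A = np.array([[1., 1.], [0., 1.]])
B = np.array([[1., 0.], [1., 1.]])
assert not np.allclose(A @ B, B @ A)  # generic matrices do not commute

D1 = np.diag([2., 3.])
D2 = np.diag([5., 7.])
assert np.allclose(D1 @ D2, D2 @ D1)  # diagonal matrices always commute
```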
For a square matrix $A$, $A^0 = I$ and $A^{k+1} = A^k A$.
$A^j A^k = A^{j+k}$ and $(A^j)^k = A^{jk}$ (proved by induction).
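The power laws can be spot-checked with `np.linalg.matrix_power`. A sketch assuming NumPy and an illustrative matrix:

```python
import numpy as np

A = np.array([[1., 1.], [0., 2.]])
Aj = np.linalg.matrix_power(A, 3)  # A^3
Ak = np.linalg.matrix_power(A, 4)  # A^4

# A^j A^k = A^(j+k) and (A^j)^k = A^(jk)
assert np.allclose(Aj @ Ak, np.linalg.matrix_power(A, 7))
assert np.allclose(np.linalg.matrix_power(Aj, 4), np.linalg.matrix_power(A, 12))
```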
$N = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ satisfies $N^2 = 0$. A matrix $N$ with $N^k = 0$ for some $k \geq 1$ is called nilpotent. The smallest such $k$ is the index of nilpotency.
$P = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$ satisfies $P^2 = P$. Such a matrix is called idempotent and represents a projection.
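A quick numerical check of nilpotency and idempotency; a sketch assuming NumPy, with the standard small examples:

```python
import numpy as np

N = np.array([[0., 1.], [0., 0.]])  # nilpotent: N != 0 but N^2 = 0
assert N.any() and not (N @ N).any()

P = np.array([[1., 0.], [0., 0.]])  # idempotent: P^2 = P (projection onto the x-axis)
assert np.allclose(P @ P, P)
```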
Block (partitioned) matrices can be multiplied "as if blocks were scalars," provided dimensions match:
$$\begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} E & F \\ G & H \end{pmatrix} = \begin{pmatrix} AE + BG & AF + BH \\ CE + DG & CF + DH \end{pmatrix}.$$
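The blockwise formula can be verified against an ordinary product of the assembled matrices. A sketch assuming NumPy (`np.block` assembles a matrix from blocks); the random blocks are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = (rng.standard_normal((2, 2)) for _ in range(4))
E, F, G, H = (rng.standard_normal((2, 2)) for _ in range(4))

M = np.block([[A, B], [C, D]])  # 4x4 matrix assembled from 2x2 blocks
N = np.block([[E, F], [G, H]])

# Blockwise product, treating blocks like scalars (block order matters!)
blockwise = np.block([[A @ E + B @ G, A @ F + B @ H],
                      [C @ E + D @ G, C @ F + D @ H]])
assert np.allclose(M @ N, blockwise)
```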
For column vectors $u \in F^m$ and $v \in F^n$, the outer product $uv^T$ is an $m \times n$ matrix of rank at most 1:
$$(uv^T)_{ij} = u_i v_j.$$
Every rank-1 matrix has this form.
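The rank claim can be checked with `np.outer` and `np.linalg.matrix_rank`. A sketch assuming NumPy and illustrative vectors:

```python
import numpy as np

u = np.array([1., 2., 3.])  # u in R^3
v = np.array([4., 5.])      # v in R^2
M = np.outer(u, v)          # u v^T, a 3x2 matrix with M_ij = u_i * v_j

assert M.shape == (3, 2)
assert np.linalg.matrix_rank(M) == 1  # rank at most 1 (exactly 1 here since u, v != 0)
```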
For the transpose $A^T$ (with $(A^T)_{ij} = a_{ji}$):
- $(A + B)^T = A^T + B^T$ and $(cA)^T = c\,A^T$
- $(AB)^T = B^T A^T$ (order reverses)
- $(A^T)^T = A$
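The reversal of order in $(AB)^T = B^T A^T$ is forced by the shapes. A sketch assuming NumPy, with illustrative rectangular matrices:

```python
import numpy as np

A = np.array([[1., 2., 3.], [4., 5., 6.]])              # 2x3
B = np.array([[1., 0., 0., 1.],
              [0., 1., 0., 1.],
              [0., 0., 1., 1.]])                         # 3x4

assert np.allclose((A @ B).T, B.T @ A.T)  # (AB)^T = B^T A^T: order reverses
# Note: A.T @ B.T would be a (3x2) times (4x3) product -- the shapes don't match.
```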
If $S : U \to V$ has matrix $B$ and $T : V \to W$ has matrix $A$ (with respect to compatible bases), then $T \circ S$ has matrix $AB$. Matrix multiplication is composition.
Matrix representation of linear maps
Let $T : V \to W$ be linear, $\mathcal{B} = (v_1, \dots, v_n)$ a basis for $V$, and $\mathcal{C} = (w_1, \dots, w_m)$ a basis for $W$. The matrix $[T]_{\mathcal{B},\mathcal{C}}$ of $T$ with respect to $\mathcal{B}$ and $\mathcal{C}$ is the $m \times n$ matrix whose $j$-th column is $[T(v_j)]_{\mathcal{C}}$, the coordinate vector of $T(v_j)$ with respect to $\mathcal{C}$.
$T : \mathbb{R}^2 \to \mathbb{R}^2$, $T(x, y) = (2x + y, x - y)$, with standard bases.
$T(e_1) = (2, 1)$ and $T(e_2) = (1, -1)$, so $[T] = \begin{pmatrix} 2 & 1 \\ 1 & -1 \end{pmatrix}$.
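The recipe "the $j$-th column is the image of the $j$-th basis vector" is mechanical enough to automate. A sketch assuming NumPy; `matrix_of` and the sample map $T(x, y) = (x + 2y, 3x)$ are hypothetical names chosen for illustration:

```python
import numpy as np

def matrix_of(T, n):
    """Matrix of a linear map T: R^n -> R^m w.r.t. standard bases:
    the j-th column is T(e_j)."""
    return np.column_stack([T(np.eye(n)[:, j]) for j in range(n)])

# A hypothetical linear map for illustration: T(x, y) = (x + 2y, 3x)
T = lambda v: np.array([v[0] + 2 * v[1], 3 * v[0]])
M = matrix_of(T, 2)

# The matrix reproduces the map on any vector
x = np.array([3., 4.])
assert np.allclose(M @ x, T(x))
```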
If $P$ is the change-of-basis matrix from $\mathcal{B}'$ to $\mathcal{B}$ and $Q$ is the change-of-basis matrix from $\mathcal{C}'$ to $\mathcal{C}$, then
$$[T]_{\mathcal{B}',\mathcal{C}'} = Q^{-1}\, [T]_{\mathcal{B},\mathcal{C}}\, P.$$
For a linear operator $T : V \to V$ with two bases $\mathcal{B}$ and $\mathcal{B}'$ related by $P$:
$$[T]_{\mathcal{B}'} = P^{-1}\, [T]_{\mathcal{B}}\, P.$$
Matrices $A$ and $B$ related by $B = P^{-1} A P$ for some invertible $P$ are called similar and represent the same linear operator in different bases.
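Because similar matrices represent the same operator, basis-independent quantities such as the trace, determinant, and eigenvalues are preserved under similarity. A sketch assuming NumPy, with an illustrative matrix and change of basis:

```python
import numpy as np

A = np.array([[2., 1.], [0., 3.]])
P = np.array([[1., 1.], [0., 1.]])   # an invertible change-of-basis matrix

B = np.linalg.inv(P) @ A @ P          # B = P^{-1} A P is similar to A

# Similar matrices share basis-independent data:
assert np.isclose(np.trace(B), np.trace(A))
assert np.isclose(np.linalg.det(B), np.linalg.det(A))
assert np.allclose(np.sort(np.linalg.eigvals(B)), np.sort(np.linalg.eigvals(A)))
```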
The question of finding the "simplest" matrix for a given linear operator (via a good choice of basis) leads to eigenvalues, diagonalization, and ultimately Jordan normal form.