
Matrix-Tree Theorem (Spectral Proof)

Theorem 8.1 (Kirchhoff's Matrix-Tree Theorem)

The number of spanning trees of a graph $G$ on $n$ vertices is $t(G) = \frac{1}{n}\prod_{i=2}^n \mu_i$, where $\mu_2 \leq \cdots \leq \mu_n$ are the nonzero Laplacian eigenvalues. Equivalently, $t(G) = \frac{1}{n}\det\left(L + \frac{1}{n}J\right)$, or $t(G)$ equals any cofactor of $L$.
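As a numerical sanity check, all three formulas can be evaluated with NumPy on a small example (the choice of the 4-cycle $C_4$, which has exactly 4 spanning trees, is ours, not from the text):

```python
import numpy as np

# 4-cycle C4 on vertices 0..3; it has exactly 4 spanning trees.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1

# (1) product of the nonzero eigenvalues, divided by n
mu = np.linalg.eigvalsh(L)          # sorted ascending; mu[0] ~ 0
t_spec = np.prod(mu[1:]) / n

# (2) det(L + J/n) / n
t_J = np.linalg.det(L + np.ones((n, n)) / n) / n

# (3) a cofactor: delete the last row and column
t_cof = np.linalg.det(L[:-1, :-1])

print(round(t_spec), round(t_J), round(t_cof))  # 4 4 4
```

All three agree with the direct count: deleting any one of the four cycle edges leaves a spanning tree.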


Proof

Setup. The Laplacian $L = D - A$ has eigenvalues $0 = \mu_1 \leq \mu_2 \leq \cdots \leq \mu_n$, with eigenvector $\mathbf{1} = (1,\ldots,1)^T$ for $\mu_1 = 0$.

Step 1: Cofactor formula. We prove that $t(G)$ equals any cofactor of $L$, say $\det(L_0)$, where $L_0$ is obtained from $L$ by deleting the last row and column.

Cauchy-Binet formula. Let $B$ be the oriented incidence matrix ($B_{ve} = 1$ if $v$ is the head of edge $e$, $-1$ if $v$ is the tail, $0$ otherwise), and let $B_0$ be $B$ with the last row deleted. A direct check gives $L = BB^T$, hence $L_0 = B_0 B_0^T$. By Cauchy-Binet: $\det(B_0 B_0^T) = \sum_{|S|=n-1} (\det B_0[S])^2$, where $S$ ranges over $(n-1)$-element subsets of the edges and $B_0[S]$ is the square submatrix on those columns.
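The identity can be checked directly by brute force over edge subsets (a sketch, again using the 4-cycle $C_4$ as our example graph):

```python
import numpy as np
from itertools import combinations

# Oriented incidence matrix of C4: column e has -1 at the tail, +1 at the head.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
B = np.zeros((n, len(edges)))
for e, (u, v) in enumerate(edges):
    B[u, e], B[v, e] = -1.0, 1.0

B0 = B[:-1, :]                       # delete the last row
lhs = np.linalg.det(B0 @ B0.T)       # det(L_0)

# Cauchy-Binet: sum of squared maximal minors over (n-1)-edge subsets
rhs = sum(np.linalg.det(B0[:, list(S)]) ** 2
          for S in combinations(range(len(edges)), n - 1))

print(round(lhs), round(rhs))  # 4 4
```

For $C_4$ every 3-edge subset is a spanning tree, so each minor is $\pm 1$ and both sides equal 4.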

Step 2: $\det B_0[S] \in \{-1, 0, 1\}$. The submatrix $B_0[S]$ corresponds to a subgraph $H_S$ with edge set $S$. If $H_S$ is a spanning tree: $B_0[S]$ is the reduced incidence matrix of a tree, which has determinant $\pm 1$ (proved by induction: a tree has a leaf, and expanding along the leaf's row gives $\pm 1$ times a smaller tree's incidence determinant). If $H_S$ is not a spanning tree: with $n-1$ edges it must contain a cycle; the signed sum of the cycle's columns vanishes, so $B_0[S]$ is singular and $\det B_0[S] = 0$.
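Both cases of Step 2 can be illustrated numerically (the specific tree and cyclic edge sets below are our own small examples):

```python
import numpy as np

def reduced_incidence(n, edges):
    """Oriented incidence matrix on n vertices with the last row deleted."""
    B = np.zeros((n, len(edges)))
    for e, (u, v) in enumerate(edges):
        B[u, e], B[v, e] = -1.0, 1.0
    return B[:-1, :]

# A spanning tree on 4 vertices (a path): determinant is +1 or -1.
tree = [(0, 1), (1, 2), (2, 3)]
d_tree = np.linalg.det(reduced_incidence(4, tree))

# An edge set with a cycle (vertex 3 left isolated): determinant is 0.
cyc = [(0, 1), (1, 2), (0, 2)]
d_cyc = np.linalg.det(reduced_incidence(4, cyc))

print(round(abs(d_tree)), round(abs(d_cyc)))  # 1 0
```

In the cyclic case the three columns sum (with signs) to zero, exactly as the proof predicts.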

Step 3: Therefore $\det(L_0) = \sum_T 1 = t(G)$, the sum running over spanning trees $T$.

Step 4: Spectral formula. Since $L$ has eigenvalues $0, \mu_2, \ldots, \mu_n$: $\det(L + tI) = t \prod_{i=2}^n (\mu_i + t)$ for all $t$. Comparing coefficients of $t$: on the left it is the sum of the $n$ principal $(n-1)\times(n-1)$ minors of $L$, i.e. the sum of the diagonal cofactors; on the right it is $\prod_{i=2}^n \mu_i$. It remains to show that all cofactors of $L$ are equal, so that each one is $\frac{1}{n}\prod_{i=2}^n \mu_i$.

To see this: write $L = Q\Lambda Q^T$ with $Q$ orthogonal and $\Lambda = \mathrm{diag}(0, \mu_2, \ldots, \mu_n)$. The adjugate satisfies $\mathrm{adj}(L)\,L = \det(L)\,I = 0$, so every column of $\mathrm{adj}(L)$ lies in the null space of $L$, which is spanned by $\mathbf{1}$ since $\mathrm{rank}(L) = n-1$; as $\mathrm{adj}(L)$ is symmetric, $\mathrm{adj}(L) = c\,\mathbf{1}\mathbf{1}^T$ for some scalar $c$. Taking traces, $cn = \mathrm{tr}\,\mathrm{adj}(L) = \prod_{i=2}^n \mu_i$, hence $\mathrm{adj}(L) = \prod_{i=2}^n \mu_i \cdot \frac{\mathbf{1}\mathbf{1}^T}{n}$ (the scaled projection onto the null space). Every entry of $\mathrm{adj}(L)$ equals $\frac{1}{n}\prod_{i=2}^n\mu_i$, and since $L$ is symmetric each entry is a cofactor. $\square$
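The adjugate identity at the heart of Step 4 can also be verified numerically, computing each cofactor by hand (a sketch on our running $C_4$ example):

```python
import numpy as np

# Laplacian of C4 again; check adj(L) = (prod of nonzero mu_i / n) * 11^T.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1

# Adjugate via cofactors: adj(L)[i, j] = (-1)^{i+j} det(L with row j, col i deleted)
adj = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        M = np.delete(np.delete(L, j, axis=0), i, axis=1)
        adj[i, j] = (-1) ** (i + j) * np.linalg.det(M)

mu = np.linalg.eigvalsh(L)
c = np.prod(mu[1:]) / n              # (1/n) * product of nonzero eigenvalues

print(np.allclose(adj, c * np.ones((n, n))))  # True: every cofactor equals c
```

Every entry comes out equal to $c = 4 = t(C_4)$, as the proof asserts.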


Example (Cayley's Formula from Matrix-Tree)

For $K_n$: $L = nI - J$, with eigenvalues $0$ (multiplicity $1$) and $n$ (multiplicity $n-1$). So $t(K_n) = \frac{1}{n} \cdot n^{n-1} = n^{n-2}$, recovering Cayley's formula. For $K_{a,b}$: the Laplacian eigenvalues are $0, a, b, a+b$ with multiplicities $1, b-1, a-1, 1$, giving $t(K_{a,b}) = \frac{1}{a+b}\, a^{b-1} b^{a-1} (a+b) = a^{b-1} b^{a-1}$.
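Both closed forms can be cross-checked against the cofactor formula for small parameters (helper names `tree_count`, `L_complete`, `L_biclique` are ours):

```python
import numpy as np

def tree_count(L):
    """Spanning-tree count from the Laplacian via a single cofactor."""
    return round(np.linalg.det(L[:-1, :-1]))

def L_complete(n):
    """Laplacian of K_n: L = nI - J."""
    return n * np.eye(n) - np.ones((n, n))

def L_biclique(a, b):
    """Laplacian of the complete bipartite graph K_{a,b}."""
    A = np.zeros((a + b, a + b))
    A[:a, a:] = 1; A[a:, :a] = 1
    return np.diag(A.sum(axis=1)) - A

for n in range(2, 7):
    assert tree_count(L_complete(n)) == n ** (n - 2)        # Cayley
for a, b in [(2, 3), (3, 3), (2, 4)]:
    assert tree_count(L_biclique(a, b)) == a ** (b - 1) * b ** (a - 1)
print("Cayley and K_{a,b} formulas verified")
```

For instance $t(K_5) = 5^3 = 125$ and $t(K_{2,3}) = 2^2 \cdot 3^1 = 12$, matching the determinants.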