Graph Representations and Matrices
Graph representations bridge combinatorial structures and linear algebra, enabling both efficient algorithms and powerful spectral analysis techniques. The choice of representation significantly impacts computational complexity and analytical capabilities.
For a graph $G = (V, E)$ with $|V| = n$, the adjacency matrix $A$ is the $n \times n$ matrix where:
$$A_{ij} = \begin{cases} 1 & \text{if } \{i, j\} \in E, \\ 0 & \text{otherwise.} \end{cases}$$
For undirected graphs, $A$ is symmetric. The matrix power $A^k$ counts walks of length $k$: entry $(A^k)_{ij}$ gives the number of walks from $i$ to $j$ using exactly $k$ edges.
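The walk-counting property can be checked numerically. The sketch below uses NumPy and the path on four vertices as an illustrative example (the specific graph is our choice, not prescribed by the text):

```python
import numpy as np

# Adjacency matrix of the path on four vertices 0-1-2-3 (illustrative example).
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
])

# (A^k)_{ij} counts walks of length k from i to j.
A2 = np.linalg.matrix_power(A, 2)
A3 = np.linalg.matrix_power(A, 3)
print(A2[1, 1])  # 2: walks of length 2 from vertex 1 back to itself (via 0 and via 2)
print(A3[0, 3])  # 1: the unique walk of length 3 from vertex 0 to vertex 3 (0-1-2-3)
```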
The adjacency matrix requires $O(n^2)$ space and supports edge existence queries in $O(1)$ time, but iterating over all neighbors of a vertex takes $O(n)$ time. This is inefficient for sparse graphs, where $m \ll n^2$.
An adjacency list representation stores, for each vertex $v$, a list of its neighbors. Total space is $O(n + m)$, which is optimal for sparse graphs. Iterating over the neighbors of $v$ takes $O(\deg(v))$ time, but an edge existence query for $\{u, v\}$ requires $O(\deg(u))$ time to scan $u$'s list.
For the path $P_4$ with vertices $\{1, 2, 3, 4\}$ and edges $\{\{1,2\}, \{2,3\}, \{3,4\}\}$:
Adjacency matrix:
$$A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}$$
Adjacency lists: $1: [2]$, $\;2: [1, 3]$, $\;3: [2, 4]$, $\;4: [3]$.
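A minimal sketch building both representations of this $P_4$ example, with 1-indexed vertices as in the text:

```python
# The path P4: vertices 1..4, edges {1,2}, {2,3}, {3,4}.
n = 4
edges = [(1, 2), (2, 3), (3, 4)]

# Adjacency matrix (row/column i-1 corresponds to vertex i).
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[u - 1][v - 1] = matrix[v - 1][u - 1] = 1

# Adjacency lists: total space O(n + m).
adj = {v: [] for v in range(1, n + 1)}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

print(adj[2])      # [1, 3]
print(matrix[0])   # [0, 1, 0, 0]
```

Note the trade-off in action: `matrix[u-1][v-1]` answers an edge query in constant time, while `adj[v]` yields the neighbors of $v$ without scanning a full row.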
The incidence matrix $B$ is an $n \times m$ matrix where:
$$B_{ve} = \begin{cases} 1 & \text{if vertex } v \text{ is an endpoint of edge } e, \\ 0 & \text{otherwise.} \end{cases}$$
For directed graphs, we use $B_{ve} = -1$ at the tail of $e$ and $B_{ve} = +1$ at the head to indicate edge orientation.
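Both variants can be built in a few lines. The helper below is a sketch; the sign convention for the directed case (tail $-1$, head $+1$) follows the definition above, though some texts use the opposite signs:

```python
def incidence_matrix(n, edges, directed=False):
    """n x m incidence matrix: B[v][e] marks vertex v's role in edge e."""
    m = len(edges)
    B = [[0] * m for _ in range(n)]
    for e, (u, v) in enumerate(edges):
        if directed:
            B[u][e] = -1   # tail of edge e
            B[v][e] = +1   # head of edge e
        else:
            B[u][e] = B[v][e] = 1
    return B

# Path on vertices 0, 1, 2 with edges (0,1) and (1,2):
B = incidence_matrix(3, [(0, 1), (1, 2)])
Bd = incidence_matrix(3, [(0, 1), (1, 2)], directed=True)
print(B)   # [[1, 0], [1, 1], [0, 1]]
print(Bd)  # [[-1, 0], [1, -1], [0, 1]]
```

In the directed case every column sums to zero, since each edge contributes exactly one $+1$ and one $-1$.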
The Laplacian matrix is $L = D - A$, where $D$ is the diagonal degree matrix with $D_{ii} = \deg(i)$. The Laplacian is positive semidefinite, with:
$$x^T L x = \sum_{\{i,j\} \in E} (x_i - x_j)^2 \ge 0.$$
The Laplacian eigenvalues $0 = \lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$ encode deep structural information: the multiplicity of the zero eigenvalue equals the number of connected components, and the second-smallest eigenvalue $\lambda_2$ (the algebraic connectivity) measures how well connected the graph is. This connection between linear algebra and graph structure is central to spectral graph theory.
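The component-counting property is easy to verify numerically. A sketch using NumPy, on a graph of our choosing with two connected components (two disjoint edges):

```python
import numpy as np

def laplacian(A):
    """L = D - A, where D is the diagonal matrix of vertex degrees."""
    return np.diag(A.sum(axis=1)) - A

# Two disjoint edges {0,1} and {2,3}: a graph with 2 connected components.
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
])

eigvals = np.linalg.eigvalsh(laplacian(A))  # sorted ascending; all >= 0 since L is PSD
num_components = int(np.sum(np.isclose(eigvals, 0)))
print(num_components)  # 2: one zero eigenvalue per connected component
```

Note that here $\lambda_2 = 0$ as well, consistent with the algebraic connectivity vanishing exactly when the graph is disconnected.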
These representations underpin efficient graph algorithms (BFS, DFS, shortest paths), while the matrix forms provide tools for spectral analysis, random walks, and graph partitioning. The interplay between combinatorial and algebraic perspectives is a recurring theme in modern graph theory.