An adjacency matrix is a boolean square matrix that represents the adjacency relationships in a graph. Given a graph with n nodes, the adjacency matrix A_{nxn} has entries a_{ij} = 1 if there is an edge from i to j, and 0 otherwise. If the graph is undirected, an edge from i to j implies the existence of an edge from j to i. Generally, we talk about simple graphs with no self-loops, so a_{ii} is 0; we don't allow multi-edges either. We can see that the diagonal entries are all 0's. Further, in the case of an undirected graph, the adjacency matrix is symmetric; this need not be so for directed graphs.

Now, it is evident that the adjacency matrix A also represents all the paths of length 1. Each entry indicates whether there is a 1-length path between the corresponding nodes, and it also tells us how many 1-length paths there are between the two nodes. (Of course, it is either 0 or 1.)
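As a quick sketch in Python (the 4-node graph below is a made-up example, not one from the figures):

```python
# Adjacency matrix of a hypothetical undirected graph on 4 nodes
# with edges 0-1, 1-2, 2-3, 0-2.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]

A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = 1
    A[j][i] = 1  # undirected, so the matrix is symmetric

# a_{ij} is exactly the number of 1-length paths from i to j
print(A[0][1])  # 1: edge 0-1 exists
print(A[0][3])  # 0: no edge between 0 and 3
```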

Interesting things happen when we multiply the adjacency matrix by itself. Let’s take some examples to see what happens.

We take the graph on the left and multiply its adjacency matrix by itself. The results are on the right. (Sorry about the bad formatting; could not figure out an easy way to align the figures properly.) The matrix 'mat2' is the matrix A^{2}. The entries a_{ij} show the number of 2-length paths between the nodes i and j. Why this happens is easy to see: if there is an edge ij and an edge jk, then there is a 2-length path from i to k through j. The diagonal entries a_{ii} are the degrees of the nodes i.
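A minimal sketch of this in Python (again with a made-up 4-node example, since the original figures are not reproduced here):

```python
# Square the adjacency matrix of a hypothetical undirected graph
# with edges 0-1, 1-2, 2-3, 0-2.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1

def matmul(X, Y):
    """Plain matrix multiplication over the integers."""
    return [[sum(X[i][k] * Y[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

A2 = matmul(A, A)  # the 'mat2' of the post

# Diagonal entries of A^2 are the node degrees
degrees = [sum(row) for row in A]
print([A2[i][i] for i in range(n)])  # [2, 2, 3, 1], same as degrees
# Off-diagonal entry counts 2-length paths i -> k -> j
print(A2[0][3])  # 1: the single path 0 -> 2 -> 3
```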

What happens if we compute A^{3}? Let’s hold it for now and see an example directed graph.

Here again, the entries in mat2 show the number of 2-length paths. The diagonal entries are 0's, unlike the case of undirected graphs where they show the degrees. If we continue this process, the entries of the next power show the number of 3-length paths. (Strictly speaking, for lengths above 2 these counts are of walks, which may revisit nodes, rather than simple paths.) This generalises to both graphs and digraphs: by repeated multiplications we can count all paths of lengths up to n - 1. If some non-diagonal entry has not taken a value greater than 0 even once in the entire process, the graph is not strongly connected.
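The strong-connectivity check can be sketched like this (a hypothetical 3-node digraph, not the one in the figure):

```python
# Directed example: a 3-cycle 0 -> 1 -> 2 -> 0 plus the extra arc 0 -> 2.
n = 3
arcs = [(0, 1), (1, 2), (2, 0), (0, 2)]
A = [[0] * n for _ in range(n)]
for i, j in arcs:
    A[i][j] = 1  # directed: no symmetric entry

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

# Record which entries become positive in A, A^2, ..., A^(n-1)
reached = [[A[i][j] > 0 for j in range(n)] for i in range(n)]
P = A
for _ in range(n - 2):
    P = matmul(P, A)
    for i in range(n):
        for j in range(n):
            reached[i][j] = reached[i][j] or P[i][j] > 0

# Strongly connected iff every off-diagonal entry was positive at least once
strongly_connected = all(reached[i][j] for i in range(n)
                         for j in range(n) if i != j)
print(strongly_connected)  # True for this 3-cycle
```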

***

Now consider mat3 in either of the above cases, which is the matrix A^{3}. The trace of this matrix shows an important structural property. The trace of a matrix is the sum of its diagonal entries: Trace = sum(a_{ii}). The trace of A^{3} has a relationship with the number of triangles in the graph.

In the case of undirected graphs, Trace = 6 * no. of triangles in the graph = 6 * no. of K_{3}'s

In the case of directed graphs, Trace = 3 * no. of triangles in the graph

Below are two more examples to illustrate the above point.
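Since the original example figures are not reproduced here, a small self-contained check of the undirected case (assuming a hypothetical 4-node graph containing exactly one triangle):

```python
# Trace of A^3 counts each undirected triangle 6 times
# (3 starting vertices x 2 directions of traversal).
n = 4
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]  # one triangle: 0-1-2
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

A3 = matmul(matmul(A, A), A)
trace = sum(A3[i][i] for i in range(n))
print(trace)       # 6
print(trace // 6)  # 1: number of triangles (K_3's) in the graph
```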

***

We can also note that the above procedure can be used to find the diameter of a graph. We have to find the minimum number of times the adjacency matrix has to be multiplied by itself so that every off-diagonal entry has taken a value greater than 0 at least once. The maximum is, of course, n - 1. Now, the complexity of this procedure is O(n * n^{3}) = O(n^{4}). This is an order bigger than finding the diameter by first finding the all-pairs shortest paths. However, in the average case, the former fares better. Also, if we can use methods of fast matrix multiplication, the complexity improves further.
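A sketch of this diameter procedure (using a hypothetical path graph 0-1-2-3, whose diameter is 3):

```python
# Diameter via repeated multiplication: the smallest k such that every
# off-diagonal entry of A, A^2, ..., A^k has been positive at least once.
n = 4
edges = [(0, 1), (1, 2), (2, 3)]  # path graph 0-1-2-3
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

seen = [[A[i][j] > 0 for j in range(n)] for i in range(n)]
P, k = A, 1
while not all(seen[i][j] for i in range(n) for j in range(n) if i != j):
    P = matmul(P, A)
    k += 1
    for i in range(n):
        for j in range(n):
            seen[i][j] = seen[i][j] or P[i][j] > 0
print(k)  # 3: the diameter of the path graph
```

(For a disconnected graph the loop would never terminate, so in practice we would also stop once k reaches n - 1.)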

***

Are there more interesting properties of adjacency matrices? I think so. It would be a good exercise to explore.

Er.. this is what is called computing the transitive closure of a graph, is it not? When you keep multiplying the adjacency matrix by itself, there is some point n where all A^k (k > n) will be identical to A^n. That is when the graph is transitively closed.

What you say makes sense, but it does not seem to be happening. I am a bit confused now.

Actually, this has more information than a transitive closure. In a transitive closure, we can find out about reachability. But here, we can find out how many paths of each length exist between nodes, not just whether a path exists.

To add one more thing – transitive closure is a boolean matrix. But the A^k matrices are not.

@sri,

The procedure described above is not the transitive closure. As sanket pointed out, the transitive closure only gives the reachability and, if represented as an adjacency matrix, will be a boolean matrix. So, if graph H is the transitive closure of graph G, then

X(H) = R(G)

where X(H) is the adjacency matrix of H and R(G) is the reachability (or accessibility) matrix of G [reference: "Graph Theory" by Narsingh Deo]. Transitive closure is usually computed using the Warshall-Floyd algorithm (http://www.cs.nmsu.edu/~ipivkina/TransClosure/index.html).

These properties you pointed out are indeed very interesting.

It would be interesting to see what the semantics are if we start with a weighted adjacency matrix (i.e. if the edges of the graph have weights on them, then the entries in the matrix are these weights.)

A small addition to your observations: just like A^n gives the number of paths of length n, (A^n + A^m) gives the number of paths of length n OR m, and so on.
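For instance (a hypothetical K_3 example; note that for lengths above 2 these counts are really walks, which may revisit nodes):

```python
# Number of 2-or-3-length walks between nodes of a triangle (K3):
# entries of A^2 + A^3.
n = 3
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

A2 = matmul(A, A)
A3 = matmul(A2, A)
S = [[A2[i][j] + A3[i][j] for j in range(n)] for i in range(n)]
print(S[0][1])  # 4: one 2-length walk (0-2-1) plus three 3-length walks
```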

Siddhartha:

Thanks for the additional observations.

In the case of a weighted graph, we can probably have two separate matrices – an adjacency matrix and a weight matrix. Paths of various lengths can be computed by multiplying the adjacency matrices (which are boolean) and then adding the corresponding weights to form a new weights matrix that gives the weights of the corresponding paths. Eventually, we will get the heaviest path, which is the diameter. (Disclaimer: I haven’t confirmed if this is valid. Just thinking aloud.)

I don’t see what the problem is if the matrix shows how many paths there are between nodes, rather than just a 1 or 0. Just set all non-zero values to 1 and you get the H or G or whatever you call them. It is like evaluating an if condition in C: 0 means false, non-zero means true.
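In code, that thresholding idea looks something like this (a hypothetical 3-node digraph 0 -> 1 -> 2):

```python
# OR together the boolean versions of A, A^2, ..., A^(n-1)
# to recover the reachability (transitive closure) matrix.
n = 3
A = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]  # directed path 0 -> 1 -> 2

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

R = [[A[i][j] > 0 for j in range(n)] for i in range(n)]
P = A
for _ in range(n - 2):
    P = matmul(P, A)
    for i in range(n):
        for j in range(n):
            R[i][j] = R[i][j] or P[i][j] > 0
print(R[0][2])  # True: 2 is reachable from 0, though not adjacent
```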

Let’s not go into citation mania and name dropping please.. *feels irritated*